Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7199868 2023-03-09 17:13:19 2023-03-12 07:10:33 2023-03-12 08:21:29 1:10:56 1:01:06 0:09:50 smithi main centos 8.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi017 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199869 2023-03-09 17:13:20 2023-03-12 07:10:33 2023-03-12 07:35:34 0:25:01 0:18:43 0:06:18 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd} 1
Failure Reason:

Command failed (workunit test osd/divergent-priors.sh) on smithi105 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1535bff13ef9f910f1d4cb360069ee00dc3b970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/divergent-priors.sh'

fail 7199870 2023-03-09 17:13:21 2023-03-12 07:10:33 2023-03-12 08:19:56 1:09:23 1:02:17 0:07:06 smithi main rhel 8.6 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi072 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199871 2023-03-09 17:13:23 2023-03-12 07:10:34 2023-03-12 07:29:32 0:18:58 0:12:02 0:06:56 smithi main rhel 8.6 rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi062 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f15fdeec-c0a6-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi062:vg_nvme/lv_4'

dead 7199872 2023-03-09 17:13:24 2023-03-12 07:10:34 2023-03-12 19:19:47 12:09:13 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps} 2
Failure Reason:

hit max job timeout

fail 7199873 2023-03-09 17:13:25 2023-03-12 07:10:35 2023-03-12 08:19:52 1:09:17 1:02:46 0:06:31 smithi main rhel 8.6 rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi012 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199874 2023-03-09 17:13:26 2023-03-12 07:10:45 2023-03-12 08:22:06 1:11:21 1:00:54 0:10:27 smithi main centos 8.stream rados/singleton/{all/admin-socket mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi184 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199875 2023-03-09 17:13:27 2023-03-12 07:10:45 2023-03-12 07:38:32 0:27:47 0:19:54 0:07:53 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7199876 2023-03-09 17:13:28 2023-03-12 07:10:46 2023-03-12 08:20:47 1:10:01 1:03:14 0:06:47 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
Failure Reason:

Command failed on smithi033 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199877 2023-03-09 17:13:29 2023-03-12 07:10:56 2023-03-12 08:21:33 1:10:37 1:03:07 0:07:30 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi038 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7199878 2023-03-09 17:13:31 2023-03-12 07:11:17 2023-03-12 10:01:52 2:50:35 2:20:07 0:30:28 smithi main centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8}} 1
fail 7199879 2023-03-09 17:13:32 2023-03-12 07:11:17 2023-03-12 08:33:39 1:22:22 1:10:54 0:11:28 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed on smithi008 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199880 2023-03-09 17:13:33 2023-03-12 07:11:18 2023-03-12 08:20:08 1:08:50 0:58:01 0:10:49 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_radosbench} 1
Failure Reason:

Command failed on smithi100 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199881 2023-03-09 17:13:34 2023-03-12 07:11:18 2023-03-12 08:24:56 1:13:38 0:58:47 0:14:51 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/cache-snaps-balanced} 2
Failure Reason:

Command failed on smithi029 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199882 2023-03-09 17:13:35 2023-03-12 07:11:19 2023-03-12 08:19:49 1:08:30 1:00:15 0:08:15 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/libcephsqlite} 2
Failure Reason:

Command failed on smithi101 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7199883 2023-03-09 17:13:36 2023-03-12 07:11:59 2023-03-12 07:47:47 0:35:48 0:29:38 0:06:10 smithi main rhel 8.6 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
pass 7199884 2023-03-09 17:13:37 2023-03-12 07:12:00 2023-03-12 07:40:33 0:28:33 0:20:39 0:07:54 smithi main rhel 8.6 rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7199885 2023-03-09 17:13:38 2023-03-12 07:12:20 2023-03-12 08:21:10 1:08:50 0:58:20 0:10:30 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
Failure Reason:

Command failed on smithi003 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199886 2023-03-09 17:13:40 2023-03-12 07:12:21 2023-03-12 07:42:20 0:29:59 0:16:07 0:13:52 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi019 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3c52f0c8-c0a8-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi019:vg_nvme/lv_4'

fail 7199887 2023-03-09 17:13:41 2023-03-12 07:12:51 2023-03-12 08:23:18 1:10:27 1:02:16 0:08:11 smithi main rhel 8.6 rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi130 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199888 2023-03-09 17:13:42 2023-03-12 07:13:52 2023-03-12 08:26:48 1:12:56 1:02:06 0:10:50 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi002 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199889 2023-03-09 17:13:43 2023-03-12 07:14:23 2023-03-12 08:25:15 1:10:52 0:58:54 0:11:58 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{ubuntu_latest} tasks/failover} 2
Failure Reason:

Command failed on smithi037 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199890 2023-03-09 17:13:44 2023-03-12 07:14:23 2023-03-12 07:37:06 0:22:43 0:11:55 0:10:48 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi097 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 370700fa-c0a8-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi097:/dev/nvme4n1'

fail 7199891 2023-03-09 17:13:45 2023-03-12 07:15:34 2023-03-12 08:26:06 1:10:32 0:58:48 0:11:44 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-snaps} 2
Failure Reason:

Command failed on smithi116 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199892 2023-03-09 17:13:46 2023-03-12 07:15:35 2023-03-12 08:26:40 1:11:05 0:58:40 0:12:25 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/pool-create-delete} 2
Failure Reason:

Command failed on smithi084 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199893 2023-03-09 17:13:48 2023-03-12 07:15:55 2023-03-12 08:26:56 1:11:01 0:58:56 0:12:05 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 2
Failure Reason:

Command failed on smithi050 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199894 2023-03-09 17:13:49 2023-03-12 07:15:56 2023-03-12 08:25:52 1:09:56 1:01:55 0:08:01 smithi main rhel 8.6 rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi174 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199895 2023-03-09 17:13:50 2023-03-12 07:15:56 2023-03-12 08:28:32 1:12:36 1:01:48 0:10:48 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199896 2023-03-09 17:13:51 2023-03-12 07:16:57 2023-03-12 07:59:46 0:42:49 0:32:16 0:10:33 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7199897 2023-03-09 17:13:52 2023-03-12 07:16:57 2023-03-12 08:29:31 1:12:34 1:01:40 0:10:54 smithi main centos 8.stream rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi089 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199898 2023-03-09 17:13:53 2023-03-12 07:17:38 2023-03-12 07:43:13 0:25:35 0:15:28 0:10:07 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f23edc76-c0a8-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi032:vg_nvme/lv_4'

fail 7199899 2023-03-09 17:13:54 2023-03-12 07:17:38 2023-03-12 08:29:01 1:11:23 0:58:11 0:13:12 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi039 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199900 2023-03-09 17:13:56 2023-03-12 07:18:09 2023-03-12 07:40:33 0:22:24 0:11:00 0:11:24 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-mapper.sh) on smithi040 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1535bff13ef9f910f1d4cb360069ee00dc3b970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-mapper.sh'

fail 7199901 2023-03-09 17:13:57 2023-03-12 07:18:09 2023-03-12 08:28:48 1:10:39 0:58:41 0:11:58 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache} 2
Failure Reason:

Command failed on smithi159 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199902 2023-03-09 17:13:58 2023-03-12 07:18:10 2023-03-12 08:26:47 1:08:37 0:58:12 0:10:25 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
Failure Reason:

Command failed on smithi190 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199903 2023-03-09 17:13:59 2023-03-12 07:18:30 2023-03-12 08:28:12 1:09:42 1:03:04 0:06:38 smithi main rhel 8.6 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi007 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7199904 2023-03-09 17:14:00 2023-03-12 07:18:31 2023-03-12 07:43:58 0:25:27 0:14:51 0:10:36 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_adoption} 1
fail 7199905 2023-03-09 17:14:01 2023-03-12 07:18:31 2023-03-12 08:28:42 1:10:11 1:00:13 0:09:58 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi145 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7199906 2023-03-09 17:14:02 2023-03-12 07:19:12 2023-03-12 08:02:57 0:43:45 0:33:30 0:10:15 smithi main centos 8.stream rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{centos_8}} 1
fail 7199907 2023-03-09 17:14:04 2023-03-12 07:19:12 2023-03-12 08:28:25 1:09:13 0:58:09 0:11:04 smithi main ubuntu 20.04 rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi178 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199908 2023-03-09 17:14:05 2023-03-12 07:19:13 2023-03-12 08:28:53 1:09:40 1:02:35 0:07:05 smithi main rhel 8.6 rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi115 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199909 2023-03-09 17:14:06 2023-03-12 07:19:13 2023-03-12 08:30:34 1:11:21 0:59:14 0:12:07 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

Command failed on smithi138 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199910 2023-03-09 17:14:07 2023-03-12 07:20:14 2023-03-12 08:30:52 1:10:38 0:58:46 0:11:52 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi112 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199911 2023-03-09 17:14:08 2023-03-12 07:20:25 2023-03-12 08:31:47 1:11:22 1:01:15 0:10:07 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi144 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199912 2023-03-09 17:14:09 2023-03-12 07:20:26 2023-03-12 07:40:47 0:20:21 0:12:32 0:07:49 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi026 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 942c0adc-c0a8-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi026:/dev/nvme4n1'

fail 7199913 2023-03-09 17:14:10 2023-03-12 07:20:36 2023-03-12 08:30:12 1:09:36 1:02:52 0:06:44 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

Command failed on smithi133 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7199914 2023-03-09 17:14:11 2023-03-12 07:20:37 2023-03-12 07:42:37 0:22:00 0:09:37 0:12:23 smithi main ubuntu 20.04 rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 2
fail 7199915 2023-03-09 17:14:12 2023-03-12 07:21:47 2023-03-12 08:31:16 1:09:29 1:02:16 0:07:13 smithi main rhel 8.6 rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi119 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199916 2023-03-09 17:14:13 2023-03-12 07:21:48 2023-03-12 08:32:55 1:11:07 0:59:21 0:11:46 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi088 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199917 2023-03-09 17:14:15 2023-03-12 07:21:49 2023-03-12 07:51:37 0:29:48 0:19:24 0:10:24 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

pass 7199918 2023-03-09 17:14:16 2023-03-12 07:21:49 2023-03-12 08:05:54 0:44:05 0:33:20 0:10:45 smithi main centos 8.stream rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
fail 7199919 2023-03-09 17:14:17 2023-03-12 07:22:50 2023-03-12 08:34:26 1:11:36 0:58:14 0:13:22 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/dedup-io-snaps} 2
Failure Reason:

Command failed on smithi016 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199920 2023-03-09 17:14:18 2023-03-12 07:22:50 2023-03-12 08:34:48 1:11:58 1:01:13 0:10:45 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_5925} 2
Failure Reason:

Command failed on smithi114 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199921 2023-03-09 17:14:19 2023-03-12 07:23:01 2023-03-12 07:55:07 0:32:06 0:20:22 0:11:44 smithi main centos 8.stream rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi156 with status 6: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats'

fail 7199922 2023-03-09 17:14:20 2023-03-12 07:23:01 2023-03-12 08:32:02 1:09:01 0:58:23 0:10:38 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
Failure Reason:

Command failed on smithi135 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199923 2023-03-09 17:14:21 2023-03-12 07:23:02 2023-03-12 08:35:43 1:12:41 1:01:23 0:11:18 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-bitmap} supported-random-distro$/{centos_8} tasks/insights} 2
Failure Reason:

Command failed on smithi031 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7199924 2023-03-09 17:14:22 2023-03-12 07:23:22 2023-03-12 07:51:35 0:28:13 0:18:21 0:09:52 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/classic task/test_cephadm} 1
fail 7199925 2023-03-09 17:14:24 2023-03-12 07:23:22 2023-03-12 08:33:34 1:10:12 1:02:40 0:07:32 smithi main rhel 8.6 rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi053 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199926 2023-03-09 17:14:25 2023-03-12 07:23:23 2023-03-12 08:33:25 1:10:02 1:02:49 0:07:13 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
Failure Reason:

Command failed on smithi073 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199927 2023-03-09 17:14:26 2023-03-12 07:23:23 2023-03-12 08:33:17 1:09:54 0:58:30 0:11:24 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-overwrites} 2
Failure Reason:

Command failed on smithi082 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199928 2023-03-09 17:14:27 2023-03-12 07:23:24 2023-03-12 08:33:50 1:10:26 0:58:30 0:11:56 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} 2
Failure Reason:

Command failed on smithi018 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199929 2023-03-09 17:14:28 2023-03-12 07:24:04 2023-03-12 08:34:46 1:10:42 1:00:35 0:10:07 smithi main centos 8.stream rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi081 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199930 2023-03-09 17:14:29 2023-03-12 07:24:05 2023-03-12 07:44:46 0:20:41 0:12:30 0:08:11 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi063 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24355048-c0a9-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi063:/dev/nvme4n1'

fail 7199931 2023-03-09 17:14:31 2023-03-12 07:24:05 2023-03-12 08:37:52 1:13:47 1:00:49 0:12:58 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/pool-snaps-few-objects} 2
Failure Reason:

Command failed on smithi191 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199932 2023-03-09 17:14:32 2023-03-12 07:26:47 2023-03-12 07:42:42 0:15:55 0:06:29 0:09:26 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/flannel rook/1.7.2} 1
Failure Reason:

Command failed on smithi102 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7199933 2023-03-09 17:14:33 2023-03-12 07:26:47 2023-03-12 08:05:56 0:39:09 0:28:48 0:10:21 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{filestore-xfs} tasks/dashboard} 2
Failure Reason:

Test failure: setUpClass (tasks.mgr.dashboard.test_cephfs.CephfsTest)

fail 7199934 2023-03-09 17:14:34 2023-03-12 07:26:58 2023-03-12 10:54:04 3:27:06 3:13:35 0:13:31 smithi main centos 8.stream rados/objectstore/{backends/alloc-hint supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test rados/test_alloc_hint.sh) on smithi049 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1535bff13ef9f910f1d4cb360069ee00dc3b970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_alloc_hint.sh'

fail 7199935 2023-03-09 17:14:35 2023-03-12 07:26:58 2023-03-12 08:36:14 1:09:16 1:02:29 0:06:47 smithi main rhel 8.6 rados/rest/{mgr-restful supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi067 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199936 2023-03-09 17:14:36 2023-03-12 07:26:59 2023-03-12 08:35:58 1:08:59 1:01:57 0:07:02 smithi main rhel 8.6 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi117 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7199937 2023-03-09 17:14:37 2023-03-12 07:26:59 2023-03-12 07:52:30 0:25:31 0:16:35 0:08:56 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/c2c} 1
fail 7199938 2023-03-09 17:14:38 2023-03-12 07:27:20 2023-03-12 08:14:49 0:47:29 0:40:30 0:06:59 smithi main rhel 8.6 rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi086 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7199939 2023-03-09 17:14:40 2023-03-12 07:27:20 2023-03-12 08:47:06 1:19:46 1:09:15 0:10:31 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
Failure Reason:

expected valgrind issues and found none

pass 7199940 2023-03-09 17:14:41 2023-03-12 07:27:21 2023-03-12 07:46:10 0:18:49 0:10:32 0:08:17 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_cephadm_repos} 1
pass 7199941 2023-03-09 17:14:42 2023-03-12 07:29:42 2023-03-12 07:53:31 0:23:49 0:08:39 0:15:10 smithi main ubuntu 20.04 rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
fail 7199942 2023-03-09 17:14:43 2023-03-12 07:35:44 2023-03-12 08:50:34 1:14:50 1:00:38 0:14:12 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
Failure Reason:

Command failed on smithi090 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199943 2023-03-09 17:14:44 2023-03-12 07:38:35 2023-03-12 08:49:58 1:11:23 1:01:45 0:09:38 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi040 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199944 2023-03-09 17:14:45 2023-03-12 07:40:36 2023-03-12 08:59:57 1:19:21 1:09:47 0:09:34 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed on smithi026 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199945 2023-03-09 17:14:46 2023-03-12 07:40:56 2023-03-12 08:54:26 1:13:30 1:01:07 0:12:23 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

Command failed on smithi023 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199946 2023-03-09 17:14:48 2023-03-12 07:42:17 2023-03-12 08:49:36 1:07:19 1:01:18 0:06:01 smithi main rhel 8.6 rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi148 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199947 2023-03-09 17:14:49 2023-03-12 07:42:18 2023-03-12 08:08:03 0:25:45 0:14:34 0:11:11 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi019 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5185874a-c0ac-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi019:vg_nvme/lv_4'

pass 7199948 2023-03-09 17:14:50 2023-03-12 07:42:28 2023-03-12 08:00:35 0:18:07 0:08:33 0:09:34 smithi main ubuntu 20.04 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 2
fail 7199949 2023-03-09 17:14:51 2023-03-12 07:42:28 2023-03-12 08:12:10 0:29:42 0:20:29 0:09:13 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7199950 2023-03-09 17:14:53 2023-03-12 07:42:39 2023-03-12 08:50:38 1:07:59 0:57:33 0:10:26 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
Failure Reason:

Command failed on smithi121 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199951 2023-03-09 17:14:54 2023-03-12 07:42:39 2023-03-12 08:50:29 1:07:50 1:01:28 0:06:22 smithi main rhel 8.6 rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi102 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199952 2023-03-09 17:14:55 2023-03-12 07:42:49 2023-03-12 08:53:16 1:10:27 1:02:18 0:08:09 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi006 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199953 2023-03-09 17:14:56 2023-03-12 07:44:00 2023-03-12 08:52:47 1:08:47 0:57:48 0:10:59 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

Command failed on smithi063 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199954 2023-03-09 17:14:57 2023-03-12 07:44:50 2023-03-12 08:09:31 0:24:41 0:12:36 0:12:05 smithi main centos 8.stream rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi062 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5f1dd614-c0ac-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi062:vg_nvme/lv_4'

fail 7199955 2023-03-09 17:14:58 2023-03-12 07:46:21 2023-03-12 08:55:45 1:09:24 0:57:39 0:11:45 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi123 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199956 2023-03-09 17:15:00 2023-03-12 07:47:52 2023-03-12 08:57:13 1:09:21 0:59:23 0:09:58 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/rados_python} 2
Failure Reason:

Command failed on smithi165 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199957 2023-03-09 17:15:01 2023-03-12 07:47:52 2023-03-12 09:00:52 1:13:00 1:02:04 0:10:56 smithi main rhel 8.6 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_api_tests} 2
Failure Reason:

Command failed on smithi005 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199958 2023-03-09 17:15:02 2023-03-12 07:51:43 2023-03-12 08:59:44 1:08:01 0:57:24 0:10:37 smithi main ubuntu 20.04 rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi129 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199959 2023-03-09 17:15:03 2023-03-12 07:51:44 2023-03-12 09:00:43 1:08:59 1:01:31 0:07:28 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{rhel_8} tasks/module_selftest} 2
Failure Reason:

Command failed on smithi078 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199960 2023-03-09 17:15:04 2023-03-12 07:53:34 2023-03-12 08:18:26 0:24:52 0:13:45 0:11:07 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason:

Command failed on smithi156 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 03b21158-c0ae-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi156:vg_nvme/lv_4'

fail 7199961 2023-03-09 17:15:05 2023-03-12 07:55:15 2023-03-12 09:11:07 1:15:52 1:00:33 0:15:19 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-radosbench} 2
Failure Reason:

Command failed on smithi099 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199962 2023-03-09 17:15:07 2023-03-12 07:59:56 2023-03-12 09:09:50 1:09:54 1:00:31 0:09:23 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi131 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199963 2023-03-09 17:15:08 2023-03-12 08:00:37 2023-03-12 09:20:39 1:20:02 1:07:14 0:12:48 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/radosbench} 2
Failure Reason:

reached maximum tries (500) after waiting for 3000 seconds

fail 7199964 2023-03-09 17:15:09 2023-03-12 08:03:08 2023-03-12 09:13:52 1:10:44 0:57:20 0:13:24 smithi main ubuntu 20.04 rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi047 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199965 2023-03-09 17:15:10 2023-03-12 08:05:58 2023-03-12 08:51:44 0:45:46 0:35:39 0:10:07 smithi main ubuntu 20.04 rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --force --op remove --pgid 3.b'

fail 7199966 2023-03-09 17:15:11 2023-03-12 08:05:59 2023-03-12 08:30:30 0:24:31 0:10:13 0:14:18 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi157 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 32473970-c0af-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi157:/dev/nvme4n1'

fail 7199967 2023-03-09 17:15:13 2023-03-12 08:08:09 2023-03-12 09:15:55 1:07:46 0:57:27 0:10:19 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
Failure Reason:

Command failed on smithi019 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199968 2023-03-09 17:15:14 2023-03-12 08:08:10 2023-03-12 09:17:01 1:08:51 1:00:09 0:08:42 smithi main centos 8.stream rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi195 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7199969 2023-03-09 17:15:15 2023-03-12 08:08:10 2023-03-12 08:30:33 0:22:23 0:11:11 0:11:12 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7199970 2023-03-09 17:15:16 2023-03-12 08:09:41 2023-03-12 08:44:43 0:35:02 0:21:05 0:13:57 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/crush} 1
fail 7199971 2023-03-09 17:15:17 2023-03-12 08:12:01 2023-03-12 09:20:03 1:08:02 0:57:39 0:10:23 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/redirect} 2
Failure Reason:

Command failed on smithi042 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199972 2023-03-09 17:15:18 2023-03-12 08:12:12 2023-03-12 09:23:17 1:11:05 1:00:24 0:10:41 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed on smithi052 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199973 2023-03-09 17:15:19 2023-03-12 08:12:12 2023-03-12 09:27:08 1:14:56 0:57:57 0:16:59 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

Command failed on smithi086 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199974 2023-03-09 17:15:21 2023-03-12 08:18:34 2023-03-12 09:28:52 1:10:18 1:01:47 0:08:31 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi012 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199975 2023-03-09 17:15:22 2023-03-12 08:19:54 2023-03-12 08:47:44 0:27:50 0:17:54 0:09:56 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7199976 2023-03-09 17:15:23 2023-03-12 08:20:04 2023-03-12 09:30:17 1:10:13 1:01:38 0:08:35 smithi main rhel 8.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
Failure Reason:

Command failed on smithi083 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7199977 2023-03-09 17:15:24 2023-03-12 08:20:55 2023-03-12 08:46:16 0:25:21 0:19:28 0:05:53 smithi main rhel 8.6 rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
fail 7199978 2023-03-09 17:15:25 2023-03-12 08:21:15 2023-03-12 09:30:21 1:09:06 0:59:32 0:09:34 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} 2
Failure Reason:

Command failed on smithi017 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199979 2023-03-09 17:15:26 2023-03-12 08:21:36 2023-03-12 09:30:39 1:09:03 1:00:15 0:08:48 smithi main centos 8.stream rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi136 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199980 2023-03-09 17:15:28 2023-03-12 08:21:36 2023-03-12 09:34:14 1:12:38 1:02:39 0:09:59 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi029 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199981 2023-03-09 17:15:29 2023-03-12 08:24:57 2023-03-12 08:46:54 0:21:57 0:11:21 0:10:36 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_nfs} 1
Failure Reason:

Command failed on smithi046 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid baf83a38-c0b1-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi046:vg_nvme/lv_4'

fail 7199982 2023-03-09 17:15:30 2023-03-12 08:24:57 2023-03-12 09:33:50 1:08:53 1:01:47 0:07:06 smithi main rhel 8.6 rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi122 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199983 2023-03-09 17:15:31 2023-03-12 08:25:18 2023-03-12 09:32:50 1:07:32 0:57:43 0:09:49 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi037 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199984 2023-03-09 17:15:32 2023-03-12 08:25:18 2023-03-12 09:34:16 1:08:58 0:57:49 0:11:09 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/redirect_promote_tests} 2
Failure Reason:

Command failed on smithi116 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199985 2023-03-09 17:15:33 2023-03-12 08:26:09 2023-03-12 09:38:14 1:12:05 1:00:39 0:11:26 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_osdmap_prune} 2
Failure Reason:

Command failed on smithi151 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199986 2023-03-09 17:15:34 2023-03-12 08:26:49 2023-03-12 08:48:11 0:21:22 0:11:17 0:10:05 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi177 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1081a8d6-c0b2-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi177:/dev/nvme4n1'

fail 7199987 2023-03-09 17:15:36 2023-03-12 08:26:50 2023-03-12 09:34:35 1:07:45 0:57:47 0:09:58 smithi main ubuntu 20.04 rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi002 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199988 2023-03-09 17:15:37 2023-03-12 08:26:50 2023-03-12 09:34:55 1:08:05 0:57:47 0:10:18 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi172 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199989 2023-03-09 17:15:38 2023-03-12 08:26:50 2023-03-12 08:53:04 0:26:14 0:14:32 0:11:42 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi084 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8e302744-c0b2-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi084:vg_nvme/lv_4'

fail 7199990 2023-03-09 17:15:39 2023-03-12 08:27:01 2023-03-12 09:34:43 1:07:42 0:57:44 0:09:58 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} 1
Failure Reason:

Command failed on smithi050 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199991 2023-03-09 17:15:40 2023-03-12 08:27:01 2023-03-12 09:37:23 1:10:22 1:02:25 0:07:57 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-small-objects-balanced} 2
Failure Reason:

Command failed on smithi007 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199992 2023-03-09 17:15:41 2023-03-12 08:28:32 2023-03-12 09:36:45 1:08:13 0:58:09 0:10:04 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{ubuntu_latest} tasks/progress} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199993 2023-03-09 17:15:43 2023-03-12 08:28:42 2023-03-12 09:37:58 1:09:16 1:02:09 0:07:07 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect_set_object} 2
Failure Reason:

Command failed on smithi145 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199994 2023-03-09 17:15:44 2023-03-12 08:28:52 2023-03-12 08:54:01 0:25:09 0:19:35 0:05:34 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7199995 2023-03-09 17:15:45 2023-03-12 08:28:53 2023-03-12 09:38:01 1:09:08 1:02:09 0:06:59 smithi main rhel 8.6 rados/objectstore/{backends/filejournal supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi150 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7199996 2023-03-09 17:15:46 2023-03-12 08:28:53 2023-03-12 08:51:20 0:22:27 0:14:59 0:07:28 smithi main centos 8.stream rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/many msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} 1
fail 7199997 2023-03-09 17:15:47 2023-03-12 08:28:53 2023-03-12 09:40:04 1:11:11 1:00:44 0:10:27 smithi main centos 8.stream rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi057 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7199998 2023-03-09 17:15:48 2023-03-12 08:28:54 2023-03-12 08:52:41 0:23:47 0:13:35 0:10:12 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi115 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ca26319e-c0b2-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi115:vg_nvme/lv_4'

dead 7199999 2023-03-09 17:15:49 2023-03-12 08:28:54 2023-03-12 20:39:06 12:10:12 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/rados_striper} 2
Failure Reason:

hit max job timeout

fail 7200000 2023-03-09 17:15:51 2023-03-12 08:29:35 2023-03-12 09:38:45 1:09:10 1:01:46 0:07:24 smithi main rhel 8.6 rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi133 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200001 2023-03-09 17:15:52 2023-03-12 08:30:15 2023-03-12 09:41:49 1:11:34 1:00:41 0:10:53 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} 2
Failure Reason:

Command failed on smithi062 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200002 2023-03-09 17:15:53 2023-03-12 08:30:36 2023-03-12 09:49:41 1:19:05 1:09:25 0:09:40 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

Command failed on smithi138 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

dead 7200003 2023-03-09 17:15:54 2023-03-12 08:30:36 2023-03-12 20:40:45 12:10:09 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
Failure Reason:

hit max job timeout

fail 7200004 2023-03-09 17:15:55 2023-03-12 08:30:36 2023-03-12 09:16:07 0:45:31 0:35:35 0:09:56 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

failed to complete snap trimming before timeout

fail 7200005 2023-03-09 17:15:56 2023-03-12 08:30:57 2023-03-12 09:39:59 1:09:02 1:02:05 0:06:57 smithi main rhel 8.6 rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi119 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200006 2023-03-09 17:15:58 2023-03-12 08:31:17 2023-03-12 09:05:38 0:34:21 0:23:31 0:10:50 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} 1
Failure Reason:

Command failed (workunit test erasure-code/test-erasure-code.sh) on smithi173 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1535bff13ef9f910f1d4cb360069ee00dc3b970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code.sh'

fail 7200007 2023-03-09 17:15:59 2023-03-12 08:31:48 2023-03-12 08:53:20 0:21:32 0:11:29 0:10:03 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi135 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c35df5b8-c0b2-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi135:/dev/nvme4n1'

pass 7200008 2023-03-09 17:16:00 2023-03-12 08:32:08 2023-03-12 09:03:22 0:31:14 0:19:37 0:11:37 smithi main ubuntu 20.04 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 3
fail 7200009 2023-03-09 17:16:01 2023-03-12 08:32:59 2023-03-12 09:42:33 1:09:34 1:02:18 0:07:16 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi073 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200010 2023-03-09 17:16:02 2023-03-12 08:33:29 2023-03-12 09:41:17 1:07:48 0:57:36 0:10:12 smithi main ubuntu 20.04 rados/singleton/{all/mon-config mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi082 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200011 2023-03-09 17:16:03 2023-03-12 08:33:30 2023-03-12 09:41:08 1:07:38 0:57:24 0:10:14 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
Failure Reason:

Command failed on smithi053 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200012 2023-03-09 17:16:05 2023-03-12 08:33:40 2023-03-12 09:45:41 1:12:01 1:00:49 0:11:12 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

Command failed on smithi008 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200013 2023-03-09 17:16:06 2023-03-12 08:33:40 2023-03-12 09:03:10 0:29:30 0:16:04 0:13:26 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi016 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid df0be1ac-c0b3-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi016:vg_nvme/lv_4'

fail 7200014 2023-03-09 17:16:07 2023-03-12 08:34:51 2023-03-12 08:53:22 0:18:31 0:06:23 0:12:08 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/host rook/master} 3
Failure Reason:

Command failed on smithi031 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7200015 2023-03-09 17:16:08 2023-03-12 08:35:52 2023-03-12 08:59:34 0:23:42 0:13:36 0:10:06 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed on smithi117 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ce933d20-c0b3-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi117:vg_nvme/lv_4'

fail 7200016 2023-03-09 17:16:09 2023-03-12 08:36:02 2023-03-12 09:44:15 1:08:13 0:57:32 0:10:41 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi067 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200017 2023-03-09 17:16:11 2023-03-12 08:36:23 2023-03-12 09:45:46 1:09:23 0:57:58 0:11:25 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
Failure Reason:

Command failed on smithi191 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200018 2023-03-09 17:16:12 2023-03-12 08:37:53 2023-03-12 11:06:34 2:28:41 2:09:32 0:19:09 smithi main centos 8.stream rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/one workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi194 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1535bff13ef9f910f1d4cb360069ee00dc3b970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

fail 7200019 2023-03-09 17:16:13 2023-03-12 08:46:25 2023-03-12 09:54:51 1:08:26 1:01:39 0:06:47 smithi main rhel 8.6 rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi003 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200020 2023-03-09 17:16:14 2023-03-12 08:46:25 2023-03-12 09:12:01 0:25:36 0:18:48 0:06:48 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7200021 2023-03-09 17:16:15 2023-03-12 08:46:56 2023-03-12 09:55:37 1:08:41 0:57:30 0:11:11 smithi main ubuntu 20.04 rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi203 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200022 2023-03-09 17:16:16 2023-03-12 08:47:16 2023-03-12 09:57:42 1:10:26 0:58:02 0:12:24 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read} 2
Failure Reason:

Command failed on smithi072 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200023 2023-03-09 17:16:18 2023-03-12 08:47:46 2023-03-12 09:56:25 1:08:39 0:57:48 0:10:51 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/small-objects-localized} 2
Failure Reason:

Command failed on smithi177 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200024 2023-03-09 17:16:19 2023-03-12 08:48:17 2023-03-12 09:57:02 1:08:45 1:01:22 0:07:23 smithi main rhel 8.6 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi148 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200025 2023-03-09 17:16:20 2023-03-12 08:49:38 2023-03-12 09:59:14 1:09:36 1:01:48 0:07:48 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{rhel_8} tasks/prometheus} 2
Failure Reason:

Command failed on smithi040 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200026 2023-03-09 17:16:21 2023-03-12 08:50:08 2023-03-12 09:57:38 1:07:30 1:01:18 0:06:12 smithi main rhel 8.6 rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi141 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200027 2023-03-09 17:16:22 2023-03-12 08:50:38 2023-03-12 09:59:48 1:09:10 0:59:18 0:09:52 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_big} 2
Failure Reason:

Command failed on smithi097 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200028 2023-03-09 17:16:24 2023-03-12 08:50:39 2023-03-12 09:12:24 0:21:45 0:11:01 0:10:44 smithi main centos 8.stream rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi090 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6a99a12c-c0b5-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi090:vg_nvme/lv_4'

fail 7200029 2023-03-09 17:16:25 2023-03-12 08:50:39 2023-03-12 09:59:40 1:09:01 1:00:20 0:08:41 smithi main centos 8.stream rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi121 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200030 2023-03-09 17:16:26 2023-03-12 08:50:40 2023-03-12 09:59:19 1:08:39 0:57:43 0:10:56 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} 1
Failure Reason:

Command failed on smithi035 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200031 2023-03-09 17:16:27 2023-03-12 08:51:30 2023-03-12 10:00:57 1:09:27 0:57:49 0:11:38 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/small-objects} 2
Failure Reason:

Command failed on smithi045 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200032 2023-03-09 17:16:28 2023-03-12 08:52:51 2023-03-12 09:17:27 0:24:36 0:14:07 0:10:29 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi084 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 27503e52-c0b6-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi084:vg_nvme/lv_4'

fail 7200033 2023-03-09 17:16:30 2023-03-12 08:53:11 2023-03-12 09:59:54 1:06:43 1:01:22 0:05:21 smithi main rhel 8.6 rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi115 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200034 2023-03-09 17:16:31 2023-03-12 08:53:11 2023-03-12 09:12:05 0:18:54 0:11:41 0:07:13 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi006 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 33ae1b98-c0b5-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi006:/dev/nvme4n1'

fail 7200035 2023-03-09 17:16:32 2023-03-12 08:53:22 2023-03-12 15:17:47 6:24:25 6:13:44 0:10:41 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi135 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1535bff13ef9f910f1d4cb360069ee00dc3b970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7200036 2023-03-09 17:16:33 2023-03-12 08:53:22 2023-03-12 10:02:06 1:08:44 0:57:51 0:10:53 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

Command failed on smithi120 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200037 2023-03-09 17:16:34 2023-03-12 08:53:23 2023-03-12 10:04:09 1:10:46 1:00:48 0:09:58 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi031 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200038 2023-03-09 17:16:36 2023-03-12 08:53:23 2023-03-12 09:13:11 0:19:48 0:12:25 0:07:23 smithi main rhel 8.6 rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews} 2
fail 7200039 2023-03-09 17:16:37 2023-03-12 08:54:04 2023-03-12 10:08:01 1:13:57 1:00:53 0:13:04 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi023 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200040 2023-03-09 17:16:38 2023-03-12 08:55:54 2023-03-12 09:17:35 0:21:41 0:14:21 0:07:20 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/classic task/test_adoption} 1
pass 7200041 2023-03-09 17:16:39 2023-03-12 08:57:15 2023-03-12 09:24:16 0:27:01 0:18:21 0:08:40 smithi main centos 8.stream rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7200042 2023-03-09 17:16:40 2023-03-12 08:57:15 2023-03-12 09:25:15 0:28:00 0:19:19 0:08:41 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/mgr} 1
fail 7200043 2023-03-09 17:16:41 2023-03-12 08:59:36 2023-03-12 10:18:30 1:18:54 1:08:33 0:10:21 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

Command failed on smithi117 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200044 2023-03-09 17:16:42 2023-03-12 08:59:36 2023-03-12 10:08:52 1:09:16 1:00:38 0:08:38 smithi main centos 8.stream rados/singleton/{all/peer mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi129 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200045 2023-03-09 17:16:44 2023-03-12 08:59:47 2023-03-12 10:11:06 1:11:19 1:00:40 0:10:39 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} 2
Failure Reason:

Command failed on smithi026 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200046 2023-03-09 17:16:45 2023-03-12 09:00:07 2023-03-12 09:42:42 0:42:35 0:30:58 0:11:37 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7200047 2023-03-09 17:16:46 2023-03-12 09:00:47 2023-03-12 10:09:54 1:09:07 1:01:45 0:07:22 smithi main rhel 8.6 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed on smithi005 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200048 2023-03-09 17:16:47 2023-03-12 09:00:58 2023-03-12 10:12:00 1:11:02 1:01:59 0:09:03 smithi main rhel 8.6 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi016 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200049 2023-03-09 17:16:48 2023-03-12 09:03:19 2023-03-12 10:13:46 1:10:27 1:01:46 0:08:41 smithi main rhel 8.6 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi027 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200050 2023-03-09 17:16:49 2023-03-12 09:03:19 2023-03-12 10:11:47 1:08:28 0:57:36 0:10:52 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mix} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200051 2023-03-09 17:16:50 2023-03-12 09:03:29 2023-03-12 10:12:06 1:08:37 0:57:48 0:10:49 smithi main ubuntu 20.04 rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi111 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

dead 7200052 2023-03-09 17:16:52 2023-03-12 09:03:30 2023-03-12 21:13:51 12:10:21 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} 2
Failure Reason:

hit max job timeout

fail 7200053 2023-03-09 17:16:53 2023-03-12 09:05:40 2023-03-12 10:19:07 1:13:27 0:57:37 0:15:50 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-many-deletes} 2
Failure Reason:

Command failed on smithi099 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200054 2023-03-09 17:16:54 2023-03-12 09:11:11 2023-03-12 09:29:01 0:17:50 0:11:29 0:06:21 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi046 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d822fd36-c0b7-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi046:/dev/nvme4n1'

fail 7200055 2023-03-09 17:16:55 2023-03-12 09:12:02 2023-03-12 10:19:56 1:07:54 0:57:25 0:10:29 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} 1
Failure Reason:

Command failed on smithi131 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200056 2023-03-09 17:16:56 2023-03-12 09:12:02 2023-03-12 10:20:28 1:08:26 1:01:42 0:06:44 smithi main rhel 8.6 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi006 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200057 2023-03-09 17:16:58 2023-03-12 09:12:13 2023-03-12 10:20:52 1:08:39 1:00:30 0:08:09 smithi main centos 8.stream rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi032 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200058 2023-03-09 17:16:59 2023-03-12 09:12:13 2023-03-12 10:20:20 1:08:07 1:01:14 0:06:53 smithi main rhel 8.6 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi090 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200059 2023-03-09 17:17:00 2023-03-12 09:12:34 2023-03-12 10:22:23 1:09:49 1:00:16 0:09:33 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{centos_8} tasks/workunits} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200060 2023-03-09 17:17:01 2023-03-12 09:13:14 2023-03-12 09:42:36 0:29:22 0:21:49 0:07:33 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi047.front.sepia.ceph.com: ['type=AVC msg=audit(1678613947.303:19827): avc: denied { ioctl } for pid=109500 comm="iptables" path="/var/lib/containers/storage/overlay/b1d2b7621c7b6e3af7dac90f2cfc1db1f59aaff99e7db32c7e1ac15842934d6c/merged" dev="overlay" ino=3412143 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 7200061 2023-03-09 17:17:02 2023-03-12 09:13:54 2023-03-12 10:25:26 1:11:32 1:00:20 0:11:12 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
Failure Reason:

Command failed on smithi112 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200062 2023-03-09 17:17:04 2023-03-12 09:16:15 2023-03-12 10:26:02 1:09:47 1:01:45 0:08:02 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 2
Failure Reason:

Command failed on smithi019 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200063 2023-03-09 17:17:05 2023-03-12 09:17:06 2023-03-12 09:36:37 0:19:31 0:12:35 0:06:56 smithi main rhel 8.6 rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
fail 7200064 2023-03-09 17:17:06 2023-03-12 09:17:36 2023-03-12 10:36:56 1:19:20 1:09:19 0:10:01 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed on smithi161 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200065 2023-03-09 17:17:07 2023-03-12 09:17:36 2023-03-12 10:28:40 1:11:04 0:57:50 0:13:14 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
Failure Reason:

Command failed on smithi042 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200066 2023-03-09 17:17:08 2023-03-12 09:20:07 2023-03-12 10:28:02 1:07:55 1:01:25 0:06:30 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi107 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200067 2023-03-09 17:17:09 2023-03-12 09:20:48 2023-03-12 10:31:53 1:11:05 0:57:24 0:13:41 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi052 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200068 2023-03-09 17:17:10 2023-03-12 09:23:18 2023-03-12 10:04:34 0:41:16 0:30:37 0:10:39 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

pass 7200069 2023-03-09 17:17:12 2023-03-12 09:24:19 2023-03-12 09:45:45 0:21:26 0:08:22 0:13:04 smithi main ubuntu 20.04 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 2
fail 7200070 2023-03-09 17:17:13 2023-03-12 09:27:10 2023-03-12 10:40:11 1:13:01 1:00:44 0:12:17 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi012 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200071 2023-03-09 17:17:14 2023-03-12 09:29:01 2023-03-12 10:38:20 1:09:19 1:00:18 0:09:01 smithi main centos 8.stream rados/singleton/{all/radostool mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi100 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200072 2023-03-09 17:17:15 2023-03-12 09:29:11 2023-03-12 10:39:42 1:10:31 1:01:31 0:09:00 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

Command failed on smithi083 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200073 2023-03-09 17:17:16 2023-03-12 09:30:21 2023-03-12 10:38:11 1:07:50 0:57:48 0:10:02 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mostlyread} 2
Failure Reason:

Command failed on smithi017 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200074 2023-03-09 17:17:17 2023-03-12 09:30:32 2023-03-12 09:48:37 0:18:05 0:06:55 0:11:10 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/classic task/test_cephadm_repos} 1
pass 7200075 2023-03-09 17:17:18 2023-03-12 09:30:32 2023-03-12 10:12:32 0:42:00 0:31:11 0:10:49 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 7200076 2023-03-09 17:17:20 2023-03-12 09:30:42 2023-03-12 13:12:37 3:41:55 3:29:43 0:12:12 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/misc} 1
Failure Reason:

Command failed (workunit test misc/test-ceph-helpers.sh) on smithi037 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1535bff13ef9f910f1d4cb360069ee00dc3b970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/test-ceph-helpers.sh'

fail 7200077 2023-03-09 17:17:21 2023-03-12 09:32:53 2023-03-12 10:41:37 1:08:44 0:57:35 0:11:09 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} 1
Failure Reason:

Command failed on smithi122 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200078 2023-03-09 17:17:22 2023-03-12 09:33:54 2023-03-12 09:59:52 0:25:58 0:14:28 0:11:30 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi029 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e78012b0-c0bb-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi029:vg_nvme/lv_4'

fail 7200079 2023-03-09 17:17:23 2023-03-12 09:34:24 2023-03-12 09:55:35 0:21:11 0:12:31 0:08:40 smithi main centos 8.stream rados/objectstore/{backends/fusestore supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test objectstore/test_fuse.sh) on smithi154 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1535bff13ef9f910f1d4cb360069ee00dc3b970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/objectstore/test_fuse.sh'

fail 7200080 2023-03-09 17:17:24 2023-03-12 09:34:24 2023-03-12 10:42:08 1:07:44 0:57:48 0:09:56 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/snaps-few-objects} 2
Failure Reason:

Command failed on smithi033 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200081 2023-03-09 17:17:26 2023-03-12 09:34:25 2023-03-12 10:43:31 1:09:06 1:01:59 0:07:07 smithi main rhel 8.6 rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi002 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200082 2023-03-09 17:17:27 2023-03-12 09:34:45 2023-03-12 09:57:01 0:22:16 0:10:00 0:12:16 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi084 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 904805f2-c0bb-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi084:/dev/nvme4n1'

fail 7200083 2023-03-09 17:17:28 2023-03-12 09:36:46 2023-03-12 10:45:47 1:09:01 1:02:07 0:06:54 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/admin_socket_objecter_requests} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200084 2023-03-09 17:17:29 2023-03-12 09:36:46 2023-03-12 10:48:12 1:11:26 1:00:40 0:10:46 smithi main centos 8.stream rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi007 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

dead 7200085 2023-03-09 17:17:30 2023-03-12 09:37:27 2023-03-12 21:47:17 12:09:50 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects} 2
Failure Reason:

hit max job timeout

fail 7200086 2023-03-09 17:17:31 2023-03-12 09:38:07 2023-03-12 10:46:35 1:08:28 0:57:30 0:10:58 smithi main ubuntu 20.04 rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi178 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200087 2023-03-09 17:17:32 2023-03-12 09:38:07 2023-03-12 10:01:13 0:23:06 0:13:31 0:09:35 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason:

Command failed on smithi150 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 54a8fdac-c0bc-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi150:vg_nvme/lv_4'

fail 7200088 2023-03-09 17:17:34 2023-03-12 09:38:08 2023-03-12 10:49:43 1:11:35 1:00:24 0:11:11 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/crash} 2
Failure Reason:

Command failed on smithi151 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200089 2023-03-09 17:17:35 2023-03-12 09:38:18 2023-03-12 09:56:18 0:18:00 0:06:22 0:11:38 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/3-node k8s/1.21 net/calico rook/1.7.2} 3
Failure Reason:

Command failed on smithi057 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7200090 2023-03-09 17:17:36 2023-03-12 09:40:09 2023-03-12 10:52:09 1:12:00 1:01:59 0:10:01 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/dashboard} 2
Failure Reason:

Command failed on smithi053 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200091 2023-03-09 17:17:37 2023-03-12 09:41:19 2023-03-12 10:49:40 1:08:21 0:57:40 0:10:41 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi062 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200092 2023-03-09 17:17:38 2023-03-12 09:42:00 2023-03-12 10:40:34 0:58:34 0:47:14 0:11:20 smithi main ubuntu 20.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7200093 2023-03-09 17:17:39 2023-03-12 09:42:40 2023-03-12 10:53:47 1:11:07 1:00:25 0:10:42 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-agent-big} 2
Failure Reason:

Command failed on smithi073 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200094 2023-03-09 17:17:41 2023-03-12 09:42:41 2023-03-12 10:12:12 0:29:31 0:18:10 0:11:21 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7200095 2023-03-09 17:17:42 2023-03-12 09:42:41 2023-03-12 10:50:48 1:08:07 0:57:33 0:10:34 smithi main ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi105 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200096 2023-03-09 17:17:43 2023-03-12 09:42:51 2023-03-12 10:55:19 1:12:28 1:00:37 0:11:51 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed on smithi067 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200097 2023-03-09 17:17:44 2023-03-12 09:44:22 2023-03-12 10:57:14 1:12:52 1:00:28 0:12:24 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

Command failed on smithi140 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200098 2023-03-09 17:17:45 2023-03-12 09:45:53 2023-03-12 10:55:11 1:09:18 1:02:42 0:06:36 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi008 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200099 2023-03-09 17:17:47 2023-03-12 09:45:53 2023-03-12 10:57:25 1:11:32 0:57:37 0:13:55 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/readwrite} 2
Failure Reason:

Command failed on smithi046 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200100 2023-03-09 17:17:48 2023-03-12 09:48:44 2023-03-12 10:57:40 1:08:56 0:57:23 0:11:33 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
Failure Reason:

Command failed on smithi138 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

dead 7200101 2023-03-09 17:17:49 2023-03-12 09:49:44 2023-03-12 22:06:22 12:16:38 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

hit max job timeout

fail 7200102 2023-03-09 17:17:50 2023-03-12 09:55:46 2023-03-12 10:17:22 0:21:36 0:11:01 0:10:35 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi119 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 79e3ee40-c0be-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi119:/dev/nvme4n1'

pass 7200103 2023-03-09 17:17:51 2023-03-12 09:56:26 2023-03-12 10:21:38 0:25:12 0:16:17 0:08:55 smithi main centos 8.stream rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/mon_recovery} 3
fail 7200104 2023-03-09 17:17:52 2023-03-12 09:56:27 2023-03-12 11:06:40 1:10:13 1:00:25 0:09:48 smithi main centos 8.stream rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi172 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200105 2023-03-09 17:17:53 2023-03-12 09:57:07 2023-03-12 11:06:33 1:09:26 1:02:11 0:07:15 smithi main rhel 8.6 rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi148 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200106 2023-03-09 17:17:55 2023-03-12 09:57:07 2023-03-12 11:07:28 1:10:21 1:02:35 0:07:46 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-agent-small} 2
Failure Reason:

Command failed on smithi072 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200107 2023-03-09 17:17:56 2023-03-12 09:57:48 2023-03-12 11:08:28 1:10:40 1:03:22 0:07:18 smithi main rhel 8.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-small-objects-overwrites} 2
Failure Reason:

Command failed on smithi040 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200108 2023-03-09 17:17:57 2023-03-12 09:59:18 2023-03-12 10:24:24 0:25:06 0:18:23 0:06:43 smithi main rhel 8.6 rados/objectstore/{backends/keyvaluedb supported-random-distro$/{rhel_8}} 1
fail 7200109 2023-03-09 17:17:58 2023-03-12 09:59:19 2023-03-12 10:20:01 0:20:42 0:11:06 0:09:36 smithi main centos 8.stream rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi035 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0d91ca2-c0be-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi035:vg_nvme/lv_4'

fail 7200110 2023-03-09 17:17:59 2023-03-12 09:59:29 2023-03-12 11:08:40 1:09:11 1:00:13 0:08:58 smithi main centos 8.stream rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi097 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200111 2023-03-09 17:18:01 2023-03-12 09:59:49 2023-03-12 11:26:26 1:26:37 1:20:36 0:06:01 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon} 1
fail 7200112 2023-03-09 17:18:02 2023-03-12 09:59:50 2023-03-12 11:08:50 1:09:00 1:00:17 0:08:43 smithi main centos 8.stream rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi121 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200113 2023-03-09 17:18:03 2023-03-12 09:59:50 2023-03-12 11:08:57 1:09:07 1:02:52 0:06:15 smithi main rhel 8.6 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/pool-create-delete} 2
Failure Reason:

Command failed on smithi130 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200114 2023-03-09 17:18:04 2023-03-12 10:00:01 2023-03-12 11:09:23 1:09:22 1:03:04 0:06:18 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps-readproxy} 2
Failure Reason:

Command failed on smithi029 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200115 2023-03-09 17:18:05 2023-03-12 10:00:01 2023-03-12 10:24:04 0:24:03 0:13:59 0:10:04 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/classic task/test_nfs} 1
Failure Reason:

Command failed on smithi063 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93632736-c0bf-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi063:vg_nvme/lv_4'

fail 7200116 2023-03-09 17:18:06 2023-03-12 10:01:01 2023-03-12 11:10:31 1:09:30 1:02:24 0:07:06 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
Failure Reason:

Command failed on smithi045 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200117 2023-03-09 17:18:07 2023-03-12 10:01:22 2023-03-12 11:10:00 1:08:38 0:57:32 0:11:06 smithi main ubuntu 20.04 rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi139 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200118 2023-03-09 17:18:09 2023-03-12 10:02:02 2023-03-12 11:11:09 1:09:07 1:00:17 0:08:50 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi120 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200119 2023-03-09 17:18:10 2023-03-12 10:02:13 2023-03-12 11:10:00 1:07:47 0:57:31 0:10:16 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
Failure Reason:

Command failed on smithi144 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200120 2023-03-09 17:18:11 2023-03-12 10:02:13 2023-03-12 10:29:11 0:26:58 0:14:35 0:12:23 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi031 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4f2d4d8-c0bf-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi031:vg_nvme/lv_4'

fail 7200121 2023-03-09 17:18:12 2023-03-12 10:04:14 2023-03-12 11:12:03 1:07:49 0:59:50 0:07:59 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/repair_test} 2
Failure Reason:

Command failed on smithi165 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200122 2023-03-09 17:18:13 2023-03-12 10:04:44 2023-03-12 10:38:11 0:33:27 0:18:35 0:14:52 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7200123 2023-03-09 17:18:14 2023-03-12 10:08:05 2023-03-12 11:17:15 1:09:10 1:00:26 0:08:44 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-pool-snaps} 2
Failure Reason:

Command failed on smithi123 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200124 2023-03-09 17:18:16 2023-03-12 10:08:05 2023-03-12 11:19:02 1:10:57 1:01:37 0:09:20 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-stupid} supported-random-distro$/{rhel_8} tasks/failover} 2
Failure Reason:

Command failed on smithi005 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200125 2023-03-09 17:18:17 2023-03-12 10:09:56 2023-03-12 11:19:02 1:09:06 1:00:21 0:08:45 smithi main centos 8.stream rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi129 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200126 2023-03-09 17:18:18 2023-03-12 10:09:56 2023-03-12 11:30:13 1:20:17 1:09:58 0:10:19 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

Command failed on smithi026 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200127 2023-03-09 17:18:19 2023-03-12 10:11:07 2023-03-12 10:29:38 0:18:31 0:08:06 0:10:25 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 7200128 2023-03-09 17:18:20 2023-03-12 10:11:57 2023-03-12 10:32:55 0:20:58 0:11:28 0:09:30 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi016 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a391a4ba-c0c0-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi016:/dev/nvme4n1'

fail 7200129 2023-03-09 17:18:21 2023-03-12 10:12:08 2023-03-12 11:21:52 1:09:44 1:01:39 0:08:05 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
Failure Reason:

Command failed on smithi088 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200130 2023-03-09 17:18:23 2023-03-12 10:12:18 2023-03-12 11:21:47 1:09:29 1:00:19 0:09:10 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi132 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200131 2023-03-09 17:18:24 2023-03-12 10:12:39 2023-03-12 11:25:25 1:12:46 0:57:34 0:15:12 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi027 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200132 2023-03-09 17:18:25 2023-03-12 10:13:49 2023-03-12 11:26:41 1:12:52 0:58:05 0:14:47 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi114 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200133 2023-03-09 17:18:26 2023-03-12 10:18:40 2023-03-12 10:41:07 0:22:27 0:11:43 0:10:44 smithi main centos 8.stream rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 3
fail 7200134 2023-03-09 17:18:27 2023-03-12 10:20:01 2023-03-12 11:29:19 1:09:18 1:02:07 0:07:11 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-snaps-balanced} 2
Failure Reason:

Command failed on smithi006 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

dead 7200135 2023-03-09 17:18:28 2023-03-12 10:20:31 2023-03-12 22:29:53 12:09:22 smithi main rhel 8.6 rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/many msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

hit max job timeout

fail 7200136 2023-03-09 17:18:30 2023-03-12 10:21:02 2023-03-12 10:43:11 0:22:09 0:14:40 0:07:29 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi186 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2c20e128-c0c2-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi186:vg_nvme/lv_4'

fail 7200137 2023-03-09 17:18:31 2023-03-12 10:21:42 2023-03-12 11:28:51 1:07:09 1:01:29 0:05:40 smithi main rhel 8.6 rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi133 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200138 2023-03-09 17:18:32 2023-03-12 10:21:43 2023-03-12 11:30:01 1:08:18 0:57:37 0:10:41 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi057 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200139 2023-03-09 17:18:33 2023-03-12 10:21:43 2023-03-12 11:30:17 1:08:34 0:57:33 0:11:01 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} 1
Failure Reason:

Command failed on smithi081 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200140 2023-03-09 17:18:34 2023-03-12 10:22:33 2023-03-12 10:51:20 0:28:47 0:19:41 0:09:06 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7200141 2023-03-09 17:18:35 2023-03-12 10:24:14 2023-03-12 11:34:55 1:10:41 1:01:33 0:09:08 smithi main rhel 8.6 rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi101 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200142 2023-03-09 17:18:37 2023-03-12 10:25:34 2023-03-12 11:34:50 1:09:16 1:01:53 0:07:23 smithi main rhel 8.6 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_5925} 2
Failure Reason:

Command failed on smithi019 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

dead 7200143 2023-03-09 17:18:38 2023-03-12 10:26:05 2023-03-12 22:37:43 12:11:38 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-snaps} 2
Failure Reason:

hit max job timeout

fail 7200144 2023-03-09 17:18:39 2023-03-12 10:28:06 2023-03-12 11:36:10 1:08:04 0:57:34 0:10:30 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi107 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200145 2023-03-09 17:18:40 2023-03-12 10:28:06 2023-03-12 10:51:28 0:23:22 0:13:27 0:09:55 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/osd-backfill} 1
Failure Reason:

Command failed (workunit test osd-backfill/osd-backfill-prio.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1535bff13ef9f910f1d4cb360069ee00dc3b970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd-backfill/osd-backfill-prio.sh'

fail 7200146 2023-03-09 17:18:41 2023-03-12 10:28:46 2023-03-12 11:47:38 1:18:52 1:09:45 0:09:07 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
Failure Reason:

expected valgrind issues and found none

fail 7200147 2023-03-09 17:18:43 2023-03-12 10:28:47 2023-03-12 11:38:08 1:09:21 0:59:32 0:09:49 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/scrub_test} 2
Failure Reason:

Command failed on smithi031 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200148 2023-03-09 17:18:44 2023-03-12 10:29:17 2023-03-12 11:37:06 1:07:49 0:57:34 0:10:15 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
Failure Reason:

Command failed on smithi174 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200149 2023-03-09 17:18:45 2023-03-12 10:29:17 2023-03-12 10:48:39 0:19:22 0:11:25 0:07:57 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi052 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f36e10ca-c0c2-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi052:/dev/nvme4n1'

fail 7200150 2023-03-09 17:18:46 2023-03-12 10:31:58 2023-03-12 11:41:45 1:09:47 1:02:06 0:07:41 smithi main rhel 8.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
Failure Reason:

Command failed on smithi016 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200151 2023-03-09 17:18:47 2023-03-12 10:32:59 2023-03-12 11:46:08 1:13:09 1:00:23 0:12:46 smithi main centos 8.stream rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 2
Failure Reason:

Command failed on smithi161 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200152 2023-03-09 17:18:48 2023-03-12 10:37:00 2023-03-12 11:47:05 1:10:05 1:02:04 0:08:01 smithi main rhel 8.6 rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi038 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200153 2023-03-09 17:18:49 2023-03-12 10:38:20 2023-03-12 11:47:09 1:08:49 1:02:07 0:06:42 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/cache} 2
Failure Reason:

Command failed on smithi017 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200154 2023-03-09 17:18:51 2023-03-12 10:38:21 2023-03-12 11:06:29 0:28:08 0:17:34 0:10:34 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi012 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 065d674c-c0c5-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi012:vg_nvme/lv_4'

fail 7200155 2023-03-09 17:18:52 2023-03-12 10:40:21 2023-03-12 11:49:25 1:09:04 1:01:38 0:07:26 smithi main rhel 8.6 rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi086 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200156 2023-03-09 17:18:53 2023-03-12 10:40:22 2023-03-12 11:49:35 1:09:13 1:00:33 0:08:40 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} 2
Failure Reason:

Command failed on smithi047 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200157 2023-03-09 17:18:54 2023-03-12 10:40:42 2023-03-12 11:50:35 1:09:53 1:00:26 0:09:27 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{centos_8} tasks/insights} 2
Failure Reason:

Command failed on smithi099 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200158 2023-03-09 17:18:55 2023-03-12 10:41:13 2023-03-12 11:50:54 1:09:41 1:01:53 0:07:48 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

Command failed on smithi110 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200159 2023-03-09 17:18:56 2023-03-12 10:41:43 2023-03-12 11:49:56 1:08:13 0:57:47 0:10:26 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi033 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200160 2023-03-09 17:18:57 2023-03-12 10:42:14 2023-03-12 11:54:11 1:11:57 0:58:07 0:13:50 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi002 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200161 2023-03-09 17:18:59 2023-03-12 10:45:55 2023-03-12 11:07:32 0:21:37 0:11:10 0:10:27 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_adoption} 1
fail 7200162 2023-03-09 17:19:00 2023-03-12 10:45:55 2023-03-12 11:56:05 1:10:10 1:00:17 0:09:53 smithi main centos 8.stream rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi178 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200163 2023-03-09 17:19:01 2023-03-12 10:46:45 2023-03-12 11:56:03 1:09:18 0:57:33 0:11:45 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} 1
Failure Reason:

Command failed on smithi007 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200164 2023-03-09 17:19:02 2023-03-12 10:48:16 2023-03-12 11:09:39 0:21:23 0:11:13 0:10:10 smithi main centos 8.stream rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 7200165 2023-03-09 17:19:03 2023-03-12 10:48:46 2023-03-12 11:58:00 1:09:14 0:57:44 0:11:30 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

Command failed on smithi151 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass 7200166 2023-03-09 17:19:05 2023-03-12 10:49:47 2023-03-12 13:34:55 2:45:08 2:23:01 0:22:07 smithi main centos 8.stream rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
fail 7200167 2023-03-09 17:19:06 2023-03-12 10:49:47 2023-03-12 11:16:30 0:26:43 0:14:22 0:12:21 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi063 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 95ab20f0-c0c6-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi063:vg_nvme/lv_4'

fail 7200168 2023-03-09 17:19:07 2023-03-12 10:51:28 2023-03-12 12:00:23 1:08:55 1:01:44 0:07:11 smithi main rhel 8.6 rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi043 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200169 2023-03-09 17:19:08 2023-03-12 10:51:38 2023-03-12 11:17:09 0:25:31 0:19:12 0:06:19 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7200170 2023-03-09 17:19:09 2023-03-12 10:52:19 2023-03-12 12:01:50 1:09:31 0:57:52 0:11:39 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/libcephsqlite} 2
Failure Reason:

Command failed on smithi073 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200171 2023-03-09 17:19:10 2023-03-12 10:53:49 2023-03-12 11:13:57 0:20:08 0:06:11 0:13:57 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/1-node k8s/1.21 net/flannel rook/master} 1
Failure Reason:

Command failed on smithi049 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7200172 2023-03-09 17:19:12 2023-03-12 10:54:10 2023-03-12 11:18:11 0:24:01 0:13:57 0:10:04 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2c5efe7c-c0c7-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi008:vg_nvme/lv_4'

fail 7200173 2023-03-09 17:19:13 2023-03-12 10:55:20 2023-03-12 12:03:00 1:07:40 0:57:33 0:10:07 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi078 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200174 2023-03-09 17:19:14 2023-03-12 10:55:21 2023-03-12 12:06:30 1:11:09 1:00:28 0:10:41 smithi main centos 8.stream rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi067 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200175 2023-03-09 17:19:15 2023-03-12 10:55:21 2023-03-12 12:04:22 1:09:01 1:01:26 0:07:35 smithi main rhel 8.6 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/one workloads/rados_api_tests} 2
Failure Reason:

Command failed on smithi191 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

dead 7200176 2023-03-09 17:19:16 2023-03-12 10:57:22 2023-03-12 11:16:56 0:19:34 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/dedup-io-snaps} 2
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

fail 7200177 2023-03-09 17:19:17 2023-03-12 10:57:32 2023-03-12 11:14:39 0:17:07 0:11:24 0:05:43 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi046 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e1535bff13ef9f910f1d4cb360069ee00dc3b970 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a0a7a3a2-c0c6-11ed-9af2-001a4aab830c -- ceph orch daemon add osd smithi046:/dev/nvme4n1'

fail 7200178 2023-03-09 17:19:19 2023-03-12 10:57:42 2023-03-12 12:15:37 1:17:55 1:01:37 0:16:18 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
Failure Reason:

Command failed on smithi012 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200179 2023-03-09 17:19:20 2023-03-12 11:06:34 2023-03-12 12:14:48 1:08:14 0:57:20 0:10:54 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi103 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

fail 7200180 2023-03-09 17:19:21 2023-03-12 11:06:34 2023-03-12 11:31:03 0:24:29 0:17:54 0:06:35 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd} 1
Failure Reason:

Command failed (workunit test osd/divergent-priors.sh) on smithi100 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1535bff13ef9f910f1d4cb360069ee00dc3b970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/divergent-priors.sh'