User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
benhanokh | 2023-03-28 22:46:34 | 2023-03-29 22:37:50 | 2023-03-31 01:57:21 | 1 day, 3:19:31 | rados | WIP_GBH_snap_mapper_B | smithi | e369dd5 | 195 | 38 | 65 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7224205 | 2023-03-28 22:47:39 | 2023-03-29 09:38:32 | 2023-03-29 10:00:41 | 0:22:09 | 0:13:48 | 0:08:21 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 7224210 | 2023-03-28 22:47:40 | 2023-03-29 09:38:32 | 2023-03-29 21:49:40 | 12:11:08 | | | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 |
Failure Reason: hit max job timeout
pass | 7224213 | 2023-03-28 22:47:41 | 2023-03-29 09:38:33 | 2023-03-29 10:02:46 | 0:24:13 | 0:16:42 | 0:07:31 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7224217 | 2023-03-28 22:47:42 | 2023-03-29 09:38:33 | 2023-03-29 10:09:43 | 0:31:10 | 0:19:47 | 0:11:23 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/repair_test} | 2 | |
pass | 7224221 | 2023-03-28 22:47:44 | 2023-03-29 09:38:33 | 2023-03-29 09:57:35 | 0:19:02 | 0:10:42 | 0:08:20 | smithi | main | centos | 8.stream | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8}} | 1 | |
pass | 7224224 | 2023-03-28 22:47:45 | 2023-03-29 09:38:34 | 2023-03-29 10:12:38 | 0:34:04 | 0:27:23 | 0:06:41 | smithi | main | rhel | 8.6 | rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7224229 | 2023-03-28 22:47:46 | 2023-03-29 09:38:34 | 2023-03-29 10:06:28 | 0:27:54 | 0:15:29 | 0:12:25 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/cache} | 2 | |
pass | 7224232 | 2023-03-28 22:47:47 | 2023-03-29 09:38:35 | 2023-03-29 10:00:53 | 0:22:18 | 0:14:38 | 0:07:40 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224236 | 2023-03-28 22:47:48 | 2023-03-29 09:38:35 | 2023-03-29 10:15:06 | 0:36:31 | 0:31:11 | 0:05:20 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/classic task/test_nfs} | 1 | |
pass | 7224240 | 2023-03-28 22:47:50 | 2023-03-29 09:38:35 | 2023-03-29 10:05:39 | 0:27:04 | 0:18:40 | 0:08:24 | smithi | main | centos | 8.stream | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224243 | 2023-03-28 22:47:51 | 2023-03-29 09:59:19 | 621 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/rados_5925} | 2 | ||||
fail | 7224248 | 2023-03-28 22:47:52 | 2023-03-29 09:38:36 | 2023-03-29 16:06:00 | 6:27:24 | 6:14:57 | 0:12:27 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/rados_api_tests validater/lockdep} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi148 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
dead | 7224251 | 2023-03-28 22:47:53 | 2023-03-29 09:38:37 | 2023-03-29 22:08:23 | 12:29:46 | | | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/radosbench} | 3 |
Failure Reason: hit max job timeout
pass | 7224255 | 2023-03-28 22:47:54 | 2023-03-29 09:39:27 | 2023-03-29 10:06:15 | 0:26:48 | 0:13:58 | 0:12:50 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{centos_8} tasks/crash} | 2 | |
fail | 7224259 | 2023-03-28 22:47:56 | 2023-03-29 09:39:48 | 2023-03-29 09:59:36 | 0:19:48 | 0:06:43 | 0:13:05 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: Command failed on smithi073 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
dead | 7224263 | 2023-03-28 22:47:57 | 2023-03-29 09:41:19 | 2023-03-29 21:50:51 | 12:09:32 | | | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
Failure Reason: hit max job timeout
dead | 7224266 | 2023-03-28 22:47:58 | 2023-03-29 09:41:29 | 2023-03-29 21:54:14 | 12:12:45 | | | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
Failure Reason: hit max job timeout
pass | 7224270 | 2023-03-28 22:47:59 | 2023-03-29 09:43:00 | 2023-03-29 10:08:32 | 0:25:32 | 0:18:03 | 0:07:29 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/connectivity task/test_orch_cli} | 1 | |
fail | 7224274 | 2023-03-28 22:48:00 | 2023-03-29 09:43:20 | 2023-03-29 10:21:36 | 0:38:16 | 0:27:15 | 0:11:01 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi006 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7224278 | 2023-03-28 22:48:02 | 2023-03-29 09:44:31 | 2023-03-29 10:06:51 | 0:22:20 | 0:14:24 | 0:07:56 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224281 | 2023-03-28 22:48:03 | 2023-03-29 09:44:31 | 2023-03-29 10:07:02 | 0:22:31 | 0:11:34 | 0:10:57 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
pass | 7224285 | 2023-03-28 22:48:04 | 2023-03-29 09:44:31 | 2023-03-29 10:06:40 | 0:22:09 | 0:11:29 | 0:10:40 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/dedup-io-mixed} | 2 | |
dead | 7224289 | 2023-03-28 22:48:05 | 2023-03-29 09:44:32 | 2023-03-29 21:54:17 | 12:09:45 | | | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read} | 2 |
Failure Reason: hit max job timeout
pass | 7224293 | 2023-03-28 22:48:06 | 2023-03-29 09:45:22 | 2023-03-29 10:19:38 | 0:34:16 | 0:24:48 | 0:09:28 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224297 | 2023-03-28 22:48:08 | 2023-03-29 09:45:43 | 2023-03-29 10:11:16 | 0:25:33 | 0:16:10 | 0:09:23 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7224301 | 2023-03-28 22:48:09 | 2023-03-29 09:45:53 | 2023-03-29 10:10:02 | 0:24:09 | 0:13:26 | 0:10:43 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7224304 | 2023-03-28 22:48:10 | 2023-03-29 09:46:53 | 2023-03-29 15:37:15 | 5:50:22 | 5:40:20 | 0:10:02 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1 | |
Failure Reason: Command failed (workunit test osd/divergent-priors.sh) on smithi144 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/divergent-priors.sh'
fail | 7224308 | 2023-03-28 22:48:11 | 2023-03-29 09:46:54 | 2023-03-29 10:10:03 | 0:23:09 | 0:12:43 | 0:10:26 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/scrub_test} | 2 | |
dead | 7224312 | 2023-03-28 22:48:13 | 2023-03-29 09:46:54 | 2023-03-29 21:58:10 | 12:11:16 | | | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
Failure Reason: hit max job timeout
pass | 7224316 | 2023-03-28 22:48:14 | 2023-03-29 09:47:45 | 2023-03-29 10:09:43 | 0:21:58 | 0:12:17 | 0:09:41 | smithi | main | centos | 8.stream | rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 2 | |
dead | 7224320 | 2023-03-28 22:48:15 | 2023-03-29 09:48:35 | 2023-03-29 21:57:39 | 12:09:04 | | | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/dedup-io-snaps} | 2 |
Failure Reason: hit max job timeout
pass | 7224324 | 2023-03-28 22:48:16 | 2023-03-29 09:48:46 | 2023-03-29 10:18:52 | 0:30:06 | 0:19:02 | 0:11:04 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7224328 | 2023-03-28 22:48:17 | 2023-03-29 10:16:46 | 1083 | smithi | main | centos | 8.stream | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 3 | ||||
pass | 7224331 | 2023-03-28 22:48:19 | 2023-03-29 09:49:37 | 2023-03-29 10:14:14 | 0:24:37 | 0:13:18 | 0:11:19 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224335 | 2023-03-28 22:48:20 | 2023-03-29 09:49:57 | 2023-03-29 10:11:15 | 0:21:18 | 0:15:33 | 0:05:45 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7224339 | 2023-03-28 22:48:21 | 2023-03-29 09:50:07 | 2023-03-29 10:13:22 | 0:23:15 | 0:13:57 | 0:09:18 | smithi | main | centos | 8.stream | rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224342 | 2023-03-28 22:48:22 | 2023-03-29 09:50:38 | 2023-03-29 10:10:37 | 0:19:59 | 0:09:11 | 0:10:48 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_radosbench} | 1 | |
dead | 7224347 | 2023-03-28 22:48:24 | 2023-03-29 09:50:38 | 2023-03-29 22:00:44 | 12:10:06 | | | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/pool-snaps-few-objects} | 2 |
Failure Reason: hit max job timeout
pass | 7224350 | 2023-03-28 22:48:25 | 2023-03-29 09:50:49 | 2023-03-29 10:22:32 | 0:31:43 | 0:18:52 | 0:12:51 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 7224354 | 2023-03-28 22:48:26 | 2023-03-29 09:50:49 | 2023-03-29 22:01:07 | 12:10:18 | | | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 |
Failure Reason: hit max job timeout
pass | 7224358 | 2023-03-28 22:48:27 | 2023-03-29 09:50:49 | 2023-03-29 10:40:52 | 0:50:03 | 0:37:39 | 0:12:24 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/classic task/test_orch_cli_mon} | 5 | |
pass | 7224361 | 2023-03-28 22:48:28 | 2023-03-29 09:52:10 | 2023-03-29 10:11:57 | 0:19:47 | 0:09:21 | 0:10:26 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7224366 | 2023-03-28 22:48:30 | 2023-03-29 09:52:11 | 2023-03-29 13:19:01 | 3:26:50 | 3:14:37 | 0:12:13 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi149 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 7224369 | 2023-03-28 22:48:31 | 2023-03-29 09:52:31 | 2023-03-29 10:20:04 | 0:27:33 | 0:15:35 | 0:11:58 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{centos_8} tasks/failover} | 2 | |
fail | 7224374 | 2023-03-28 22:48:32 | 2023-03-29 09:55:12 | 2023-03-29 13:37:50 | 3:42:38 | 3:29:24 | 0:13:14 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi026 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
pass | 7224377 | 2023-03-28 22:48:33 | 2023-03-29 09:57:43 | 2023-03-29 10:17:15 | 0:19:32 | 0:08:25 | 0:11:07 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/objectstore-memstore supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 7224382 | 2023-03-28 22:48:35 | 2023-03-29 09:59:23 | 2023-03-29 22:08:03 | 12:08:40 | | | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
Failure Reason: hit max job timeout
dead | 7224385 | 2023-03-28 22:48:36 | 2023-03-29 09:59:34 | 2023-03-29 22:08:16 | 12:08:42 | | | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
Failure Reason: hit max job timeout
dead | 7224388 | 2023-03-28 22:48:37 | 2023-03-29 05:15:14 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |||||
Failure Reason: Error reimaging machines: Failed to power on smithi003
pass | 7224393 | 2023-03-28 22:48:38 | 2023-03-29 09:59:44 | 2023-03-29 10:25:18 | 0:25:34 | 0:16:41 | 0:08:53 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/libcephsqlite} | 2 | |
pass | 7224396 | 2023-03-28 22:48:39 | 2023-03-29 10:00:15 | 2023-03-29 10:38:38 | 0:38:23 | 0:30:38 | 0:07:45 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 7224401 | 2023-03-28 22:48:41 | 2023-03-29 10:00:55 | 2023-03-29 13:27:34 | 3:26:39 | 3:15:20 | 0:11:19 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi042 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7224404 | 2023-03-28 22:48:42 | 2023-03-29 10:01:26 | 2023-03-29 10:23:06 | 0:21:40 | 0:10:09 | 0:11:31 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi158 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
dead | 7224408 | 2023-03-28 22:48:43 | 2023-03-29 10:02:06 | 2023-03-29 22:12:02 | 12:09:56 | | | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-many-deletes} | 2 |
Failure Reason: hit max job timeout
pass | 7224412 | 2023-03-28 22:48:44 | 2023-03-29 10:02:37 | 2023-03-29 10:31:09 | 0:28:32 | 0:18:04 | 0:10:28 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7224415 | 2023-03-28 22:48:45 | 2023-03-29 10:02:47 | 2023-03-29 10:32:53 | 0:30:06 | 0:19:58 | 0:10:08 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/rbd_cls} | 3 | |
pass | 7224420 | 2023-03-28 22:48:46 | 2023-03-29 10:03:07 | 2023-03-29 10:32:29 | 0:29:22 | 0:22:57 | 0:06:25 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7224423 | 2023-03-28 22:48:47 | 2023-03-29 10:03:08 | 2023-03-29 10:26:12 | 0:23:04 | 0:13:20 | 0:09:44 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/scrub} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-mapper.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-mapper.sh'
dead | 7224426 | 2023-03-28 22:48:49 | 2023-03-29 10:03:08 | 2023-03-29 22:22:37 | 12:19:29 | | | smithi | main | rhel | 8.6 | rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 2 |
Failure Reason: hit max job timeout
pass | 7224431 | 2023-03-28 22:48:50 | 2023-03-29 10:03:19 | 2023-03-29 10:26:22 | 0:23:03 | 0:11:03 | 0:12:00 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
dead | 7224434 | 2023-03-28 22:48:51 | 2023-03-29 10:03:19 | 2023-03-29 22:16:15 | 12:12:56 | | | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 |
Failure Reason: hit max job timeout
pass | 7224439 | 2023-03-28 22:48:52 | 2023-03-29 10:05:50 | 2023-03-29 10:50:13 | 0:44:23 | 0:33:14 | 0:11:09 | smithi | main | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7224442 | 2023-03-28 22:48:53 | 2023-03-29 10:06:10 | 2023-03-29 10:40:12 | 0:34:02 | 0:23:08 | 0:10:54 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
Failure Reason: Command failed on smithi090 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool rm unique_pool_0 unique_pool_0 --yes-i-really-really-mean-it'
pass | 7224445 | 2023-03-28 22:48:55 | 2023-03-29 10:06:21 | 2023-03-29 10:29:38 | 0:23:17 | 0:12:54 | 0:10:23 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_adoption} | 1 | |
pass | 7224450 | 2023-03-28 22:48:56 | 2023-03-29 10:06:21 | 2023-03-29 10:27:44 | 0:21:23 | 0:12:05 | 0:09:18 | smithi | main | centos | 8.stream | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 3 | |
fail | 7224453 | 2023-03-28 22:48:57 | 2023-03-29 10:06:41 | 2023-03-29 16:48:52 | 6:42:11 | 6:28:08 | 0:14:03 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 2 | |
Failure Reason: reached maximum tries (3650) after waiting for 21900 seconds
pass | 7224458 | 2023-03-28 22:48:58 | 2023-03-29 10:06:52 | 2023-03-29 10:26:53 | 0:20:01 | 0:13:23 | 0:06:38 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224461 | 2023-03-28 22:48:59 | 2023-03-29 10:07:12 | 2023-03-29 10:33:40 | 0:26:28 | 0:18:27 | 0:08:01 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
fail | 7224466 | 2023-03-28 22:49:01 | 2023-03-29 10:08:23 | 2023-03-29 11:23:56 | 1:15:33 | 1:08:19 | 0:07:14 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/radosbench} | 2 | |
Failure Reason: reached maximum tries (500) after waiting for 3000 seconds
fail | 7224469 | 2023-03-28 22:49:02 | 2023-03-29 10:08:43 | 2023-03-29 13:33:16 | 3:24:33 | 3:13:32 | 0:11:01 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi184 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7224474 | 2023-03-28 22:49:03 | 2023-03-29 10:09:54 | 2023-03-29 11:25:16 | 1:15:22 | 1:08:31 | 0:06:51 | smithi | main | rhel | 8.6 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 2 | |
Failure Reason: Command failed (workunit test rados/load-gen-mix-small.sh) on smithi176 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix-small.sh'
pass | 7224477 | 2023-03-28 22:49:05 | 2023-03-29 10:09:54 | 2023-03-29 10:35:03 | 0:25:09 | 0:15:17 | 0:09:52 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7224480 | 2023-03-28 22:49:06 | 2023-03-29 10:09:54 | 2023-03-29 10:59:05 | 0:49:11 | 0:42:01 | 0:07:10 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224485 | 2023-03-28 22:49:07 | 2023-03-29 10:09:55 | 2023-03-29 10:32:13 | 0:22:18 | 0:11:00 | 0:11:18 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | |
pass | 7224488 | 2023-03-28 22:49:08 | 2023-03-29 10:10:05 | 2023-03-29 10:33:04 | 0:22:59 | 0:16:54 | 0:06:05 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{rhel_8} tasks/insights} | 2 | |
pass | 7224492 | 2023-03-28 22:49:09 | 2023-03-29 10:10:05 | 2023-03-29 10:42:25 | 0:32:20 | 0:25:33 | 0:06:47 | smithi | main | rhel | 8.6 | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7224496 | 2023-03-28 22:49:11 | 2023-03-29 10:11:16 | 2023-03-29 11:06:22 | 0:55:06 | 0:46:09 | 0:08:57 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_mon_osdmap_prune} | 2 | |
fail | 7224499 | 2023-03-28 22:49:12 | 2023-03-29 10:11:26 | 2023-03-29 10:38:44 | 0:27:18 | 0:16:57 | 0:10:21 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi129.front.sepia.ceph.com: ['type=AVC msg=audit(1680086121.146:19741): avc: denied { ioctl } for pid=122930 comm="iptables" path="/var/lib/containers/storage/overlay/6ff038bc32b14e02d416faa58673f995b5401f6d8a39eb39a96f02332077c0ff/merged" dev="overlay" ino=3409167 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
dead | 7224504 | 2023-03-28 22:49:13 | 2023-03-29 10:11:27 | 2023-03-29 22:22:01 | 12:10:34 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
dead | 7224507 | 2023-03-28 22:49:14 | 2023-03-29 10:12:47 | 2023-03-29 22:27:08 | 12:14:21 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason: hit max job timeout
pass | 7224512 | 2023-03-28 22:49:15 | 2023-03-29 10:15:08 | 2023-03-29 10:41:57 | 0:26:49 | 0:15:12 | 0:11:37 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 7224515 | 2023-03-28 22:49:17 | 2023-03-29 10:16:49 | 2023-03-29 10:40:19 | 0:23:30 | 0:14:39 | 0:08:51 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/redirect} | 2 | |
pass | 7224519 | 2023-03-28 22:49:18 | 2023-03-29 10:17:19 | 2023-03-29 10:42:23 | 0:25:04 | 0:14:25 | 0:10:39 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
dead | 7224523 | 2023-03-28 22:49:19 | 2023-03-29 22:37:50 | 2023-03-30 10:46:34 | 12:08:44 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |||
Failure Reason: hit max job timeout
dead | 7224526 | 2023-03-28 22:49:20 | 2023-03-29 22:37:50 | 2023-03-30 10:45:49 | 12:07:59 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224531 | 2023-03-28 22:49:21 | 2023-03-29 22:37:51 | 2023-03-29 23:00:57 | 0:23:06 | 0:17:03 | 0:06:03 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 7224534 | 2023-03-28 22:49:23 | 2023-03-29 22:37:51 | 2023-03-29 22:57:56 | 0:20:05 | 0:12:58 | 0:07:07 | smithi | main | rhel | 8.6 | rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7224539 | 2023-03-28 22:49:24 | 2023-03-29 22:38:42 | 2023-03-29 22:54:35 | 0:15:53 | 0:06:24 | 0:09:29 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/flannel rook/1.7.2} | 1 | |
Failure Reason: Command failed on smithi019 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
pass | 7224542 | 2023-03-28 22:49:25 | 2023-03-29 22:38:42 | 2023-03-29 23:58:54 | 1:20:12 | 1:08:49 | 0:11:23 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/dashboard} | 2 | |
fail | 7224547 | 2023-03-28 22:49:26 | 2023-03-29 22:39:53 | 2023-03-29 23:49:18 | 1:09:25 | 0:54:06 | 0:15:19 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command crashed: 'sudo ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --force --op remove --pgid 3.b'
pass | 7224550 | 2023-03-28 22:49:28 | 2023-03-29 22:43:33 | 2023-03-29 23:08:27 | 0:24:54 | 0:18:32 | 0:06:22 | smithi | main | rhel | 8.6 | rados/rest/{mgr-restful supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224553 | 2023-03-28 22:49:29 | 2023-03-29 22:43:34 | 2023-03-29 23:11:52 | 0:28:18 | 0:17:48 | 0:10:30 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224558 | 2023-03-28 22:49:30 | 2023-03-29 22:43:44 | 2023-03-29 23:06:04 | 0:22:20 | 0:17:10 | 0:05:10 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/c2c} | 1 | |
fail | 7224561 | 2023-03-28 22:49:31 | 2023-03-29 22:43:55 | 2023-03-30 02:55:57 | 4:12:02 | 4:01:08 | 0:10:54 | smithi | main | centos | 8.stream | rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools_crun} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: Command failed (workunit test rbd/test_librbd.sh) on smithi006 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'
pass | 7224566 | 2023-03-28 22:49:32 | 2023-03-29 22:44:15 | 2023-03-29 23:15:11 | 0:30:56 | 0:22:17 | 0:08:39 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
dead | 7224569 | 2023-03-28 22:49:34 | 2023-03-30 10:53:58 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||||
Failure Reason: hit max job timeout
dead | 7224572 | 2023-03-28 22:49:35 | 2023-03-29 22:44:36 | 2023-03-30 10:54:21 | 12:09:45 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/redirect_promote_tests} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224577 | 2023-03-28 22:49:36 | 2023-03-29 22:45:06 | 2023-03-29 23:01:45 | 0:16:39 | 0:10:33 | 0:06:06 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 7224580 | 2023-03-28 22:49:37 | 2023-03-29 22:45:06 | 2023-03-29 23:16:28 | 0:31:22 | 0:20:05 | 0:11:17 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} | 2 | |
fail | 7224585 | 2023-03-28 22:49:38 | 2023-03-29 22:45:27 | 2023-03-29 23:11:18 | 0:25:51 | 0:13:20 | 0:12:31 | smithi | main | centos | 8.stream | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi033 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
pass | 7224588 | 2023-03-28 22:49:40 | 2023-03-29 22:48:38 | 2023-03-29 23:07:30 | 0:18:52 | 0:08:22 | 0:10:30 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} | 2 | |
pass | 7224592 | 2023-03-28 22:49:41 | 2023-03-29 22:49:28 | 2023-03-29 23:09:05 | 0:19:37 | 0:10:13 | 0:09:24 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 7224596 | 2023-03-28 22:49:42 | 2023-03-29 22:49:29 | 2023-03-30 11:01:17 | 12:11:48 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} | 3 | |||
Failure Reason: hit max job timeout
pass | 7224599 | 2023-03-28 22:49:43 | 2023-03-29 22:50:09 | 2023-03-29 23:14:25 | 0:24:16 | 0:13:58 | 0:10:18 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7224602 | 2023-03-28 22:49:45 | 2023-03-29 22:51:20 | 2023-03-29 23:11:35 | 0:20:15 | 0:10:38 | 0:09:37 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
pass | 7224604 | 2023-03-28 22:49:46 | 2023-03-29 22:51:20 | 2023-03-29 23:12:44 | 0:21:24 | 0:11:18 | 0:10:06 | smithi | main | centos | 8.stream | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
dead | 7224606 | 2023-03-28 22:49:47 | 2023-03-30 11:00:13 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/redirect_set_object} | 2 | |||||
Failure Reason: hit max job timeout
pass | 7224609 | 2023-03-28 22:49:48 | 2023-03-29 22:51:31 | 2023-03-29 23:17:15 | 0:25:44 | 0:13:03 | 0:12:41 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/basic 3-final} | 1 | |
pass | 7224611 | 2023-03-28 22:49:49 | 2023-03-29 22:54:12 | 2023-03-29 23:17:06 | 0:22:54 | 0:11:23 | 0:11:31 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224614 | 2023-03-28 22:49:51 | 2023-03-29 22:54:12 | 2023-03-29 23:25:27 | 0:31:15 | 0:25:34 | 0:05:41 | smithi | main | rhel | 8.6 | rados/singleton/{all/backfill-toofull mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224616 | 2023-03-28 22:49:52 | 2023-03-29 22:54:43 | 2023-03-29 23:34:09 | 0:39:26 | 0:29:41 | 0:09:45 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{centos_8} tasks/module_selftest} | 2 | |
pass | 7224619 | 2023-03-28 22:49:53 | 2023-03-29 22:54:53 | 2023-03-29 23:21:16 | 0:26:23 | 0:15:50 | 0:10:33 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
pass | 7224621 | 2023-03-28 22:49:54 | 2023-03-29 22:55:54 | 2023-03-29 23:38:50 | 0:42:56 | 0:30:42 | 0:12:14 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
dead | 7224624 | 2023-03-28 22:49:56 | 2023-03-29 22:56:04 | 2023-03-30 11:06:19 | 12:10:15 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
dead | 7224626 | 2023-03-28 22:49:57 | 2023-03-29 22:56:24 | 2023-03-30 11:10:06 | 12:13:42 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason: hit max job timeout
fail | 7224628 | 2023-03-28 22:49:58 | 2023-03-29 22:59:25 | 2023-03-30 05:42:18 | 6:42:53 | 6:28:38 | 0:14:15 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi088 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
dead | 7224631 | 2023-03-28 22:49:59 | 2023-03-29 23:00:06 | 2023-03-30 11:10:11 | 12:10:05 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/set-chunks-read} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224633 | 2023-03-28 22:50:00 | 2023-03-29 23:01:06 | 2023-03-29 23:20:56 | 0:19:50 | 0:13:21 | 0:06:29 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224636 | 2023-03-28 22:50:01 | 2023-03-29 23:01:07 | 2023-03-29 23:22:28 | 0:21:21 | 0:14:54 | 0:06:27 | smithi | main | rhel | 8.6 | rados/singleton/{all/deduptool mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224638 | 2023-03-28 22:50:03 | 2023-03-29 23:01:47 | 2023-03-29 23:29:54 | 0:28:07 | 0:16:59 | 0:11:08 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/rados_python} | 2 | |
dead | 7224640 | 2023-03-28 22:50:04 | 2023-03-29 23:02:48 | 2023-03-30 11:13:53 | 12:11:05 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224643 | 2023-03-28 22:50:05 | 2023-03-29 23:03:18 | 2023-03-29 23:39:12 | 0:35:54 | 0:24:56 | 0:10:58 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7224645 | 2023-03-28 22:50:06 | 2023-03-29 23:03:19 | 2023-03-29 23:23:15 | 0:19:56 | 0:10:43 | 0:09:13 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
pass | 7224647 | 2023-03-28 22:50:07 | 2023-03-29 23:24:16 | 510 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | ||||
pass | 7224650 | 2023-03-28 22:50:09 | 2023-03-29 23:03:19 | 2023-03-29 23:38:51 | 0:35:32 | 0:24:26 | 0:11:06 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/crush} | 1 | |
pass | 7224652 | 2023-03-28 22:50:10 | 2023-03-29 23:06:10 | 2023-03-29 23:27:29 | 0:21:19 | 0:10:18 | 0:11:01 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/divergent_priors mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 7224655 | 2023-03-28 22:50:11 | 2023-03-29 23:07:30 | 2023-03-30 11:19:10 | 12:11:40 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects-balanced} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224657 | 2023-03-28 22:50:12 | 2023-03-29 23:08:31 | 2023-03-29 23:33:03 | 0:24:32 | 0:14:18 | 0:10:14 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
dead | 7224660 | 2023-03-28 22:50:13 | 2023-03-29 23:09:21 | 2023-03-30 11:19:19 | 12:09:58 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |||
Failure Reason: hit max job timeout
dead | 7224662 | 2023-03-28 22:50:14 | 2023-03-29 23:09:42 | 2023-03-30 11:19:56 | 12:10:14 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |||
Failure Reason: hit max job timeout
pass | 7224665 | 2023-03-28 22:50:16 | 2023-03-29 23:10:12 | 2023-03-29 23:35:38 | 0:25:26 | 0:15:46 | 0:09:40 | smithi | main | centos | 8.stream | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 2 | |
pass | 7224667 | 2023-03-28 22:50:17 | 2023-03-29 23:10:23 | 2023-03-29 23:53:00 | 0:42:37 | 0:32:37 | 0:10:00 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_nfs} | 1 | |
fail | 7224669 | 2023-03-28 22:50:18 | 2023-03-29 23:11:23 | 2023-03-29 23:54:52 | 0:43:29 | 0:32:50 | 0:10:39 | smithi | main | centos | 8.stream | rados/singleton/{all/divergent_priors2 mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command crashed: 'sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --journal-path /var/lib/ceph/osd/ceph-2/journal --log-file=/var/log/ceph/objectstore_tool.$$.log --op export-remove --pgid 2.0 --file /home/ubuntu/cephtest/exp.32116.out'
pass | 7224672 | 2023-03-28 22:50:19 | 2023-03-29 23:11:24 | 2023-03-29 23:31:32 | 0:20:08 | 0:09:44 | 0:10:24 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224674 | 2023-03-28 22:50:20 | 2023-03-29 23:11:24 | 2023-03-29 23:54:45 | 0:43:21 | 0:36:37 | 0:06:44 | smithi | main | rhel | 8.6 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224676 | 2023-03-28 22:50:21 | 2023-03-29 23:11:44 | 2023-03-29 23:31:06 | 0:19:22 | 0:13:03 | 0:06:19 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/fusestore supported-random-distro$/{rhel_8}} | 1 | |
dead | 7224679 | 2023-03-28 22:50:23 | 2023-03-29 23:11:55 | 2023-03-30 11:21:26 | 12:09:31 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/small-objects-localized} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224681 | 2023-03-28 22:50:24 | 2023-03-29 23:12:15 | 2023-03-29 23:37:58 | 0:25:43 | 0:16:29 | 0:09:14 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 7224684 | 2023-03-28 22:50:25 | 2023-03-29 23:12:46 | 2023-03-29 23:36:25 | 0:23:39 | 0:13:23 | 0:10:16 | smithi | main | centos | 8.stream | rados/singleton/{all/dump-stuck mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224686 | 2023-03-28 22:50:26 | 2023-03-29 23:12:46 | 2023-03-29 23:40:52 | 0:28:06 | 0:18:16 | 0:09:50 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} | 2 | |
pass | 7224688 | 2023-03-28 22:50:27 | 2023-03-29 23:13:57 | 2023-03-29 23:34:25 | 0:20:28 | 0:08:54 | 0:11:34 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224691 | 2023-03-28 22:50:29 | 2023-03-29 23:14:17 | 2023-03-29 23:45:39 | 0:31:22 | 0:22:02 | 0:09:20 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/test_rbd_api} | 3 | |
pass | 7224693 | 2023-03-28 22:50:30 | 2023-03-29 23:14:27 | 2023-03-29 23:37:16 | 0:22:49 | 0:11:00 | 0:11:49 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
pass | 7224696 | 2023-03-28 22:50:31 | 2023-03-29 23:15:18 | 2023-03-29 23:50:27 | 0:35:09 | 0:29:22 | 0:05:47 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/progress} | 2 | |
pass | 7224698 | 2023-03-28 22:50:32 | 2023-03-29 23:15:18 | 2023-03-29 23:40:25 | 0:25:07 | 0:16:39 | 0:08:28 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
fail | 7224701 | 2023-03-28 22:50:34 | 2023-03-29 23:16:29 | 2023-03-30 00:05:27 | 0:48:58 | 0:36:54 | 0:12:04 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
Failure Reason: failed to complete snap trimming before timeout
dead | 7224703 | 2023-03-28 22:50:35 | 2023-03-29 23:17:19 | 2023-03-30 11:33:08 | 12:15:49 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason: hit max job timeout
pass | 7224706 | 2023-03-28 22:50:36 | 2023-03-29 23:21:20 | 2023-03-30 00:07:11 | 0:45:51 | 0:34:39 | 0:11:12 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/snaps-few-objects} | 2 | |
dead | 7224708 | 2023-03-28 22:50:37 | 2023-03-29 23:23:21 | 2023-03-30 11:35:01 | 12:11:40 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/small-objects} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224710 | 2023-03-28 22:50:38 | 2023-03-29 23:25:22 | 2023-03-30 00:59:36 | 1:34:14 | 1:24:07 | 0:10:07 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/ec-inconsistent-hinfo mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224713 | 2023-03-28 22:50:40 | 2023-03-29 23:25:22 | 2023-03-29 23:59:55 | 0:34:33 | 0:20:38 | 0:13:55 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 7224715 | 2023-03-28 22:50:41 | 2023-03-29 23:27:03 | 2023-03-29 23:48:14 | 0:21:11 | 0:11:55 | 0:09:16 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
dead | 7224718 | 2023-03-28 22:50:42 | 2023-03-29 23:27:03 | 2023-03-30 11:36:55 | 12:09:52 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
fail | 7224720 | 2023-03-28 22:50:43 | 2023-03-29 23:27:04 | 2023-03-29 23:51:56 | 0:24:52 | 0:16:33 | 0:08:19 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)
pass | 7224722 | 2023-03-28 22:50:44 | 2023-03-29 23:27:04 | 2023-03-30 00:11:29 | 0:44:25 | 0:33:19 | 0:11:06 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/ec-lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224725 | 2023-03-28 22:50:45 | 2023-03-29 23:27:34 | 2023-03-29 23:49:09 | 0:21:35 | 0:10:21 | 0:11:14 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224727 | 2023-03-28 22:50:47 | 2023-03-29 23:29:15 | 2023-03-30 01:04:47 | 1:35:32 | 1:29:48 | 0:05:44 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/erasure-code} | 1 | |
dead | 7224730 | 2023-03-28 22:50:48 | 2023-03-29 23:29:15 | 2023-03-30 11:38:00 | 12:08:45 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224732 | 2023-03-28 22:50:49 | 2023-03-29 23:29:46 | 2023-03-29 23:55:13 | 0:25:27 | 0:15:56 | 0:09:31 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
dead | 7224733 | 2023-03-28 22:50:50 | 2023-03-29 23:29:56 | 2023-03-30 11:42:08 | 12:12:12 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||
Failure Reason: hit max job timeout
pass | 7224734 | 2023-03-28 22:50:51 | 2023-03-29 23:31:47 | 2023-03-29 23:54:48 | 0:23:01 | 0:11:40 | 0:11:21 | smithi | main | centos | 8.stream | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 3 | |
pass | 7224735 | 2023-03-28 22:50:52 | 2023-03-29 23:33:07 | 2023-03-29 23:53:16 | 0:20:09 | 0:11:56 | 0:08:13 | smithi | main | rhel | 8.6 | rados/singleton/{all/erasure-code-nonregression mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224736 | 2023-03-28 22:50:54 | 2023-03-29 23:34:18 | 2023-03-29 23:54:25 | 0:20:07 | 0:08:56 | 0:11:11 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
pass | 7224737 | 2023-03-28 22:50:55 | 2023-03-29 23:34:18 | 2023-03-29 23:56:46 | 0:22:28 | 0:11:22 | 0:11:06 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_striper} | 2 | |
pass | 7224738 | 2023-03-28 22:50:56 | 2023-03-29 23:35:39 | 2023-03-30 00:13:19 | 0:37:40 | 0:25:20 | 0:12:20 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/connectivity task/test_orch_cli_mon} | 5 | |
fail | 7224739 | 2023-03-28 22:50:57 | 2023-03-29 23:37:29 | 2023-03-29 23:58:38 | 0:21:09 | 0:06:29 | 0:14:40 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/host rook/master} | 3 | |
Failure Reason: Command failed on smithi031 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
fail | 7224740 | 2023-03-28 22:50:58 | 2023-03-29 23:39:00 | 2023-03-30 00:16:25 | 0:37:25 | 0:26:45 | 0:10:40 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi187 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7224741 | 2023-03-28 22:50:59 | 2023-03-29 23:39:00 | 2023-03-29 23:58:13 | 0:19:13 | 0:13:47 | 0:05:26 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 7224742 | 2023-03-28 22:51:00 | 2023-03-29 23:39:21 | 2023-03-30 11:50:21 | 12:11:00 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |||
Failure Reason: hit max job timeout
dead | 7224743 | 2023-03-28 22:51:02 | 2023-03-29 23:40:31 | 2023-03-30 11:51:32 | 12:11:01 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects-localized} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224744 | 2023-03-28 22:51:03 | 2023-03-29 23:41:02 | 2023-03-30 00:24:07 | 0:43:05 | 0:33:18 | 0:09:47 | smithi | main | centos | 8.stream | rados/singleton/{all/lost-unfound-delete mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224745 | 2023-03-28 22:51:04 | 2023-03-29 23:41:02 | 2023-03-30 00:06:26 | 0:25:24 | 0:15:35 | 0:09:49 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7224746 | 2023-03-28 22:51:05 | 2023-03-29 23:45:43 | 2023-03-30 00:08:51 | 0:23:08 | 0:13:13 | 0:09:55 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224747 | 2023-03-28 22:51:06 | 2023-03-29 23:45:43 | 2023-03-30 00:12:22 | 0:26:39 | 0:14:39 | 0:12:00 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_8} tasks/prometheus} | 2 | |
pass | 7224748 | 2023-03-28 22:51:07 | 2023-03-29 23:49:14 | 2023-03-30 00:18:58 | 0:29:44 | 0:17:29 | 0:12:15 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224749 | 2023-03-28 22:51:09 | 2023-03-29 23:49:25 | 2023-03-30 00:31:44 | 0:42:19 | 0:31:47 | 0:10:32 | smithi | main | centos | 8.stream | rados/singleton/{all/lost-unfound mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224750 | 2023-03-28 22:51:10 | 2023-03-29 23:50:35 | 2023-03-30 00:13:33 | 0:22:58 | 0:14:11 | 0:08:47 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final} | 1 | |
dead | 7224751 | 2023-03-28 22:51:11 | 2023-03-29 23:50:36 | 2023-03-30 12:02:09 | 12:11:33 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
dead | 7224752 | 2023-03-28 22:51:12 | 2023-03-29 23:53:06 | 2023-03-30 12:03:49 | 12:10:43 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason: hit max job timeout
pass | 7224753 | 2023-03-28 22:51:13 | 2023-03-29 23:54:57 | 2023-03-30 00:24:57 | 0:30:00 | 0:19:06 | 0:10:54 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/pool-create-delete} | 2 | |
dead | 7224754 | 2023-03-28 22:51:14 | 2023-03-29 23:54:57 | 2023-03-30 12:03:56 | 12:08:59 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/snaps-few-objects} | 2 | |||
Failure Reason: hit max job timeout
fail | 7224755 | 2023-03-28 22:51:16 | 2023-03-29 23:55:18 | 2023-03-30 00:35:54 | 0:40:36 | 0:30:40 | 0:09:56 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason: saw valgrind issues
pass | 7224756 | 2023-03-28 22:51:17 | 2023-03-29 23:56:49 | 2023-03-30 00:18:52 | 0:22:03 | 0:12:57 | 0:09:06 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224757 | 2023-03-28 22:51:18 | 2023-03-29 23:56:49 | 2023-03-30 00:18:13 | 0:21:24 | 0:09:11 | 0:12:13 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} | 1 | |
dead | 7224758 | 2023-03-28 22:51:19 | 2023-03-29 23:58:19 | 2023-03-30 12:10:49 | 12:12:30 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 | |||
Failure Reason: hit max job timeout
pass | 7224759 | 2023-03-28 22:51:20 | 2023-03-30 00:23:59 | 1088 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | ||||
pass | 7224760 | 2023-03-28 22:51:21 | 2023-03-29 23:59:00 | 2023-03-30 00:19:08 | 0:20:08 | 0:12:47 | 0:07:21 | smithi | main | rhel | 8.6 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 7224761 | 2023-03-28 22:51:23 | 2023-03-30 00:00:01 | 2023-03-30 12:16:38 | 12:16:37 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224762 | 2023-03-28 22:51:24 | 2023-03-30 00:05:45 | 2023-03-30 00:50:14 | 0:44:29 | 0:32:36 | 0:11:53 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_big} | 2 | |
pass | 7224763 | 2023-03-28 22:51:25 | 2023-03-30 00:06:15 | 2023-03-30 00:28:12 | 0:21:57 | 0:14:45 | 0:07:12 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/classic task/test_adoption} | 1 | |
dead | 7224764 | 2023-03-28 22:51:26 | 2023-03-30 00:06:16 | 2023-03-30 12:14:27 | 12:08:11 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/write_fadvise_dontneed} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224765 | 2023-03-28 22:51:27 | 2023-03-30 00:06:36 | 2023-03-30 00:33:32 | 0:26:56 | 0:17:59 | 0:08:57 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224766 | 2023-03-28 22:51:29 | 2023-03-30 00:06:36 | 2023-03-30 00:33:11 | 0:26:35 | 0:15:17 | 0:11:18 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mgr} | 1 | |
fail | 7224767 | 2023-03-28 22:51:30 | 2023-03-30 00:07:17 | 2023-03-30 00:38:02 | 0:30:45 | 0:22:20 | 0:08:25 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason: saw valgrind issues
pass | 7224768 | 2023-03-28 22:51:31 | 2023-03-30 00:07:17 | 2023-03-30 00:32:05 | 0:24:48 | 0:16:27 | 0:08:21 | smithi | main | rhel | 8.6 | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 7224769 | 2023-03-28 22:51:32 | 2023-03-30 00:08:58 | 2023-03-30 12:21:46 | 12:12:48 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |||
Failure Reason: hit max job timeout
pass | 7224770 | 2023-03-28 22:51:33 | 2023-03-30 00:12:29 | 2023-03-30 00:35:20 | 0:22:51 | 0:11:17 | 0:11:34 | smithi | main | centos | 8.stream | rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 3 | |
pass | 7224771 | 2023-03-28 22:51:34 | 2023-03-30 00:13:30 | 2023-03-30 00:46:33 | 0:33:03 | 0:20:40 | 0:12:23 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7224772 | 2023-03-28 22:51:35 | 2023-03-30 00:13:30 | 2023-03-30 00:48:26 | 0:34:56 | 0:21:33 | 0:13:23 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7224773 | 2023-03-28 22:51:37 | 2023-03-30 00:16:31 | 2023-03-30 00:39:39 | 0:23:08 | 0:17:21 | 0:05:47 | smithi | main | rhel | 8.6 | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7224774 | 2023-03-28 22:51:38 | 2023-03-30 00:16:31 | 2023-03-30 02:08:58 | 1:52:27 | 1:36:49 | 0:15:38 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
Failure Reason: reached maximum tries (800) after waiting for 4800 seconds
pass | 7224775 | 2023-03-28 22:51:39 | 2023-03-30 00:19:02 | 2023-03-30 00:42:20 | 0:23:18 | 0:16:50 | 0:06:28 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7224776 | 2023-03-28 22:51:40 | 2023-03-30 00:19:12 | 2023-03-30 00:43:55 | 0:24:43 | 0:09:11 | 0:15:32 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
pass | 7224777 | 2023-03-28 22:51:41 | 2023-03-30 00:24:03 | 2023-03-30 01:08:58 | 0:44:55 | 0:35:47 | 0:09:08 | smithi | main | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224778 | 2023-03-28 22:51:42 | 2023-03-30 00:24:04 | 2023-03-30 00:47:12 | 0:23:08 | 0:14:17 | 0:08:51 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-auth-caps mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224779 | 2023-03-28 22:51:44 | 2023-03-30 00:24:14 | 2023-03-30 00:56:13 | 0:31:59 | 0:21:35 | 0:10:24 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224780 | 2023-03-28 22:51:45 | 2023-03-30 00:25:04 | 2023-03-30 00:50:29 | 0:25:25 | 0:14:03 | 0:11:22 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{rhel_8} tasks/workunits} | 2 | |
pass | 7224781 | 2023-03-28 22:51:46 | 2023-03-30 00:28:15 | 2023-03-30 01:03:15 | 0:35:00 | 0:22:25 | 0:12:35 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_mix} | 2 | |
pass | 7224782 | 2023-03-28 22:51:47 | 2023-03-30 00:32:06 | 2023-03-30 00:58:15 | 0:26:09 | 0:18:28 | 0:07:41 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/connectivity task/test_cephadm} | 1 | |
dead | 7224783 | 2023-03-28 22:51:48 | 2023-03-30 00:33:17 | 2023-03-30 12:46:07 | 12:12:50 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |||
Failure Reason: hit max job timeout
dead | 7224784 | 2023-03-28 22:51:49 | 2023-03-30 00:35:27 | 2023-03-30 12:45:48 | 12:10:21 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
fail | 7224785 | 2023-03-28 22:51:51 | 2023-03-30 00:35:28 | 2023-03-30 01:15:07 | 0:39:39 | 0:27:44 | 0:11:55 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
Failure Reason: Command failed on smithi035 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
pass | 7224786 | 2023-03-28 22:51:52 | 2023-03-30 00:39:49 | 2023-03-30 01:08:38 | 0:28:49 | 0:15:30 | 0:13:19 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-agent-big} | 2 | |
pass | 7224787 | 2023-03-28 22:51:53 | 2023-03-30 00:42:30 | 2023-03-30 01:08:40 | 0:26:10 | 0:11:02 | 0:15:08 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/rados_5925} | 2 | |
fail | 7224788 | 2023-03-28 22:51:54 | 2023-03-30 00:46:41 | 2023-03-30 07:19:38 | 6:32:57 | 6:19:53 | 0:13:04 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/lockdep} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi090 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 7224789 | 2023-03-28 22:51:55 | 2023-03-30 00:47:21 | 2023-03-30 01:09:30 | 0:22:09 | 0:15:09 | 0:07:00 | smithi | main | rhel | 8.6 | rados/singleton/{all/mon-config-key-caps mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224790 | 2023-03-28 22:51:57 | 2023-03-30 00:48:32 | 2023-03-30 01:08:30 | 0:19:58 | 0:09:29 | 0:10:29 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224791 | 2023-03-28 22:51:58 | 2023-03-30 00:48:32 | 2023-03-30 01:19:32 | 0:31:00 | 0:16:39 | 0:14:21 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
fail | 7224792 | 2023-03-28 22:51:59 | 2023-03-30 00:50:23 | 2023-03-30 02:50:46 | 2:00:23 | 1:48:58 | 0:11:25 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-radosbench} | 2 | |
Failure Reason: reached maximum tries (800) after waiting for 4800 seconds
pass | 7224793 | 2023-03-28 22:52:00 | 2023-03-30 00:50:33 | 2023-03-30 01:31:50 | 0:41:17 | 0:31:19 | 0:09:58 | smithi | main | centos | 8.stream | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{centos_8}} | 1 | |
pass | 7224794 | 2023-03-28 22:52:01 | 2023-03-30 00:50:53 | 2023-03-30 01:14:25 | 0:23:32 | 0:14:37 | 0:08:55 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/mon-config-keys mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224795 | 2023-03-28 22:52:02 | 2023-03-30 00:50:54 | 2023-03-30 01:08:58 | 0:18:04 | 0:07:15 | 0:10:49 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/classic task/test_cephadm_repos} | 1 | |
dead | 7224796 | 2023-03-28 22:52:04 | 2023-03-30 00:50:54 | 2023-03-30 12:58:43 | 12:07:49 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/cache-agent-small} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224797 | 2023-03-28 22:52:05 | 2023-03-30 00:50:54 | 2023-03-30 01:51:06 | 1:00:12 | 0:46:23 | 0:13:49 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224798 | 2023-03-28 22:52:06 | 2023-03-30 00:55:56 | 2023-03-30 02:21:24 | 1:25:28 | 1:15:41 | 0:09:47 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/misc} | 1 | |
pass | 7224799 | 2023-03-28 22:52:07 | 2023-03-30 00:56:16 | 2023-03-30 01:16:03 | 0:19:47 | 0:08:54 | 0:10:53 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} | 1 | |
dead | 7224800 | 2023-03-28 22:52:08 | 2023-03-30 00:58:17 | 2023-03-30 13:14:01 | 12:15:44 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/radosbench} | 3 | |||
Failure Reason: hit max job timeout
dead | 7224801 | 2023-03-28 22:52:10 | 2023-03-30 01:03:18 | 2023-03-30 13:18:18 | 12:15:00 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||
Failure Reason: hit max job timeout
pass | 7224802 | 2023-03-28 22:52:11 | 2023-03-30 01:08:39 | 2023-03-30 01:52:08 | 0:43:29 | 0:28:54 | 0:14:35 | smithi | main | centos | 8.stream | rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 3 | |
pass | 7224803 | 2023-03-28 22:52:12 | 2023-03-30 01:08:49 | 2023-03-30 01:43:04 | 0:34:15 | 0:24:15 | 0:10:00 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 7224804 | 2023-03-28 22:52:13 | 2023-03-30 01:09:00 | 2023-03-30 01:29:30 | 0:20:30 | 0:10:28 | 0:10:02 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/mon-config mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224805 | 2023-03-28 22:52:14 | 2023-03-30 01:09:40 | 2023-03-30 01:50:10 | 0:40:30 | 0:28:38 | 0:11:52 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
pass | 7224806 | 2023-03-28 22:52:15 | 2023-03-30 01:15:11 | 2023-03-30 01:38:04 | 0:22:53 | 0:15:48 | 0:07:05 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 7224807 | 2023-03-28 22:52:17 | 2023-03-30 01:15:12 | 2023-03-30 13:24:23 | 12:09:11 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224808 | 2023-03-28 22:52:18 | 2023-03-30 01:15:12 | 2023-03-30 01:41:11 | 0:25:59 | 0:15:57 | 0:10:02 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
pass | 7224809 | 2023-03-28 22:52:19 | 2023-03-30 01:16:12 | 2023-03-30 02:31:48 | 1:15:36 | 0:58:41 | 0:16:55 | smithi | main | centos | 8.stream | rados/singleton/{all/osd-backfill mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224810 | 2023-03-28 22:52:20 | 2023-03-30 01:19:33 | 2023-03-30 01:52:47 | 0:33:14 | 0:14:05 | 0:19:09 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{centos_8} tasks/crash} | 2 | |
fail | 7224811 | 2023-03-28 22:52:21 | 2023-03-30 01:50:16 | 2023-03-30 02:08:24 | 0:18:08 | 0:06:19 | 0:11:49 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/3-node k8s/1.21 net/calico rook/1.7.2} | 3 | |
Failure Reason: Command failed on smithi007 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
pass | 7224812 | 2023-03-28 22:52:22 | 2023-03-30 01:52:17 | 2023-03-30 03:19:35 | 1:27:18 | 1:17:37 | 0:09:41 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/dashboard} | 2 | |
pass | 7224813 | 2023-03-28 22:52:24 | 2023-03-30 01:52:17 | 2023-03-30 02:21:23 | 0:29:06 | 0:22:02 | 0:07:04 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 7224814 | 2023-03-28 22:52:25 | 2023-03-30 01:52:47 | 2023-03-30 14:13:30 | 12:20:43 | smithi | main | ubuntu | 20.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224815 | 2023-03-28 22:52:26 | 2023-03-30 02:00:09 | 2023-03-30 02:32:28 | 0:32:19 | 0:19:25 | 0:12:54 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7224816 | 2023-03-28 22:52:27 | 2023-03-30 02:03:20 | 2023-03-30 02:49:20 | 0:46:00 | 0:36:14 | 0:09:46 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/osd-recovery-incomplete mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 7224817 | 2023-03-28 22:52:28 | 2023-03-30 02:03:20 | 2023-03-30 14:14:00 | 12:10:40 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-pool-snaps} | 2 | |||
Failure Reason: hit max job timeout
dead | 7224818 | 2023-03-28 22:52:30 | 2023-03-30 02:04:01 | 2023-03-30 14:16:29 | 12:12:28 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
dead | 7224819 | 2023-03-28 22:52:31 | 2023-03-30 02:08:01 | 2023-03-30 14:21:08 | 12:13:07 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason: hit max job timeout
pass | 7224820 | 2023-03-28 22:52:32 | 2023-03-30 02:09:02 | 2023-03-30 02:27:01 | 0:17:59 | 0:08:57 | 0:09:02 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
fail | 7224821 | 2023-03-28 22:52:33 | 2023-03-30 02:09:02 | 2023-03-30 05:35:45 | 3:26:43 | 3:17:57 | 0:08:46 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi083 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7224822 | 2023-03-28 22:52:34 | 2023-03-30 02:10:23 | 2023-03-30 03:48:33 | 1:38:10 | 1:16:42 | 0:21:28 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi093 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
pass | 7224823 | 2023-03-28 22:52:36 | 2023-03-30 02:21:25 | 2023-03-30 02:51:22 | 0:29:57 | 0:15:59 | 0:13:58 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7224824 | 2023-03-28 22:52:37 | 2023-03-30 02:26:36 | 2023-03-30 02:48:13 | 0:21:37 | 0:12:56 | 0:08:41 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
dead | 7224825 | 2023-03-28 22:52:38 | 2023-03-30 02:27:06 | 2023-03-30 14:43:26 | 12:16:20 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-balanced} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224826 | 2023-03-28 22:52:39 | 2023-03-30 02:32:37 | 2023-03-30 03:05:46 | 0:33:09 | 0:23:34 | 0:09:35 | smithi | main | centos | 8.stream | rados/singleton/{all/osd-recovery mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
dead | 7224827 | 2023-03-28 22:52:40 | 2023-03-30 02:32:38 | 2023-03-30 14:59:28 | 12:26:50 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224828 | 2023-03-28 22:52:41 | 2023-03-30 02:50:49 | 2023-03-30 03:15:22 | 0:24:33 | 0:13:05 | 0:11:28 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/readwrite} | 2 | |
pass | 7224829 | 2023-03-28 22:52:43 | 2023-03-30 02:51:30 | 2023-03-30 03:19:15 | 0:27:45 | 0:17:02 | 0:10:43 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} | 1 | |
dead | 7224830 | 2023-03-28 22:52:44 | 2023-03-30 02:51:30 | 2023-03-30 15:05:35 | 12:14:05 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-snaps-balanced} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224831 | 2023-03-28 22:52:45 | 2023-03-30 02:56:01 | 2023-03-30 03:24:21 | 0:28:20 | 0:13:55 | 0:14:25 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224832 | 2023-03-28 22:52:46 | 2023-03-30 03:01:22 | 2023-03-30 04:26:26 | 1:25:04 | 1:16:51 | 0:08:13 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/mon} | 1 | |
dead | 7224833 | 2023-03-28 22:52:47 | 2023-03-30 03:01:22 | 2023-03-30 15:19:26 | 12:18:04 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |||
Failure Reason: hit max job timeout
pass | 7224834 | 2023-03-28 22:52:48 | 2023-03-30 03:08:44 | 2023-03-30 03:36:03 | 0:27:19 | 0:12:30 | 0:14:49 | smithi | main | centos | 8.stream | rados/singleton/{all/peer mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224835 | 2023-03-28 22:52:49 | 2023-03-30 03:15:25 | 2023-03-30 03:37:14 | 0:21:49 | 0:08:32 | 0:13:17 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 2 | |
fail | 7224836 | 2023-03-28 22:52:50 | 2023-03-30 03:19:16 | 2023-03-30 04:08:55 | 0:49:39 | 0:40:37 | 0:09:02 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Test failure: test_update_export_with_invalid_values (tasks.cephfs.test_nfs.TestNFS)
pass | 7224837 | 2023-03-28 22:52:52 | 2023-03-30 03:19:36 | 2023-03-30 05:59:10 | 2:39:34 | 2:16:42 | 0:22:52 | smithi | main | centos | 8.stream | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} | 1 | |
pass | 7224838 | 2023-03-28 22:52:53 | 2023-03-30 03:19:37 | 2023-03-30 03:43:25 | 0:23:48 | 0:13:33 | 0:10:15 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224839 | 2023-03-28 22:52:54 | 2023-03-30 03:23:38 | 2023-03-30 03:58:36 | 0:34:58 | 0:24:27 | 0:10:31 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
dead | 7224840 | 2023-03-28 22:52:55 | 2023-03-30 03:24:28 | 2023-03-30 15:47:50 | 12:23:22 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/cache-snaps} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224841 | 2023-03-28 22:52:56 | 2023-03-30 03:37:20 | 2023-03-30 04:10:45 | 0:33:25 | 0:14:56 | 0:18:29 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7224842 | 2023-03-28 22:52:57 | 2023-03-30 03:43:31 | 2023-03-30 04:34:02 | 0:50:31 | 0:21:20 | 0:29:11 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3 | |
pass | 7224843 | 2023-03-28 22:52:58 | 2023-03-30 04:09:04 | 2023-03-30 04:34:48 | 0:25:44 | 0:14:05 | 0:11:39 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7224844 | 2023-03-28 22:52:59 | 2023-03-30 04:10:54 | 2023-03-30 04:58:16 | 0:47:22 | 0:13:16 | 0:34:06 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-hybrid} supported-random-distro$/{ubuntu_latest} tasks/failover} | 2 | |
pass | 7224845 | 2023-03-28 22:53:01 | 2023-03-30 13:12:47 | 2023-03-30 13:34:27 | 0:21:40 | 0:11:05 | 0:10:35 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7224846 | 2023-03-28 22:53:02 | 2023-03-30 13:12:47 | 2023-03-30 13:36:36 | 0:23:49 | 0:14:45 | 0:09:04 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/pg-autoscaler mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224847 | 2023-03-28 22:53:03 | 2023-03-30 13:12:48 | 2023-03-30 13:38:05 | 0:25:17 | 0:16:25 | 0:08:52 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7224848 | 2023-03-28 22:53:04 | 2023-03-30 13:12:48 | 2023-03-30 13:56:19 | 0:43:31 | 0:37:14 | 0:06:17 | smithi | main | rhel | 8.6 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224849 | 2023-03-28 22:53:05 | 2023-03-30 13:13:18 | 2023-03-30 13:35:01 | 0:21:43 | 0:15:26 | 0:06:17 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache} | 2 | |
dead | 7224850 | 2023-03-28 22:53:06 | 2023-03-30 13:13:29 | 2023-03-31 01:25:33 | 12:12:04 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
dead | 7224851 | 2023-03-28 22:53:07 | 2023-03-30 13:16:20 | 2023-03-31 01:27:09 | 12:10:49 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason: hit max job timeout
pass | 7224852 | 2023-03-28 22:53:08 | 2023-03-30 13:17:10 | 2023-03-30 13:46:09 | 0:28:59 | 0:18:00 | 0:10:59 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} | 2 | |
pass | 7224853 | 2023-03-28 22:53:10 | 2023-03-30 13:35:23 | 2023-03-30 14:27:49 | 0:52:26 | 0:41:41 | 0:10:45 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_mon_osdmap_prune} | 2 | |
pass | 7224854 | 2023-03-28 22:53:11 | 2023-03-30 13:35:54 | 2023-03-30 14:01:33 | 0:25:39 | 0:15:11 | 0:10:28 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 7224855 | 2023-03-28 22:53:12 | 2023-03-30 13:36:24 | 2023-03-30 14:02:20 | 0:25:56 | 0:18:15 | 0:07:41 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_orch_cli} | 1 | |
pass | 7224856 | 2023-03-28 22:53:13 | 2023-03-30 13:36:45 | 2023-03-30 13:58:15 | 0:21:30 | 0:09:12 | 0:12:18 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/pg-removal-interruption mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7224857 | 2023-03-28 22:53:14 | 2023-03-30 13:38:05 | 2023-03-30 13:59:26 | 0:21:21 | 0:12:43 | 0:08:38 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
dead | 7224858 | 2023-03-28 22:53:15 | 2023-03-30 13:38:06 | 2023-03-31 01:47:40 | 12:09:34 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224859 | 2023-03-28 22:53:16 | 2023-03-30 13:38:06 | 2023-03-30 13:59:26 | 0:21:20 | 0:10:51 | 0:10:29 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
pass | 7224860 | 2023-03-28 22:53:18 | 2023-03-30 14:04:25 | 978 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | ||||
fail | 7224861 | 2023-03-28 22:53:19 | 2023-03-30 13:41:17 | 2023-03-30 14:10:37 | 0:29:20 | 0:17:44 | 0:11:36 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/dedup-io-mixed} | 2 | |
Failure Reason: Command failed on smithi008 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
pass | 7224862 | 2023-03-28 22:53:20 | 2023-03-30 13:42:38 | 2023-03-30 14:15:22 | 0:32:44 | 0:26:52 | 0:05:52 | smithi | main | rhel | 8.6 | rados/singleton/{all/radostool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7224863 | 2023-03-28 22:53:21 | 2023-03-30 13:42:38 | 2023-03-30 14:04:49 | 0:22:11 | 0:11:19 | 0:10:52 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7224864 | 2023-03-28 22:53:22 | 2023-03-30 13:43:49 | 2023-03-30 14:47:00 | 1:03:11 | 0:57:03 | 0:06:08 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd-backfill} | 1 | |
Failure Reason: Command failed (workunit test osd-backfill/osd-backfill-space.sh) on smithi121 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e369dd579b62fff69e4b88c7d4fb7419fe60653c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd-backfill/osd-backfill-space.sh'
pass | 7224865 | 2023-03-28 22:53:23 | 2023-03-30 13:43:49 | 2023-03-30 14:15:27 | 0:31:38 | 0:22:03 | 0:09:35 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
dead | 7224866 | 2023-03-28 22:53:25 | 2023-03-30 13:44:19 | 2023-03-31 01:54:49 | 12:10:30 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||
Failure Reason: hit max job timeout
pass | 7224867 | 2023-03-28 22:53:26 | 2023-03-30 13:44:30 | 2023-03-30 14:11:58 | 0:27:28 | 0:18:08 | 0:09:20 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
dead | 7224868 | 2023-03-28 22:53:27 | 2023-03-30 13:46:11 | 2023-03-31 01:57:21 | 12:11:10 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |||
Failure Reason: hit max job timeout
pass | 7224869 | 2023-03-28 22:53:28 | 2023-03-30 13:46:31 | 2023-03-30 14:06:25 | 0:19:54 | 0:08:37 | 0:11:17 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} | 2 | |
pass | 7224870 | 2023-03-28 22:53:29 | 2023-03-30 13:48:32 | 2023-03-30 14:24:14 | 0:35:42 | 0:27:53 | 0:07:49 | smithi | main | rhel | 8.6 | rados/singleton/{all/random-eio mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7224871 | 2023-03-28 22:53:30 | 2023-03-30 13:49:22 | 2023-03-30 14:10:25 | 0:21:03 | 0:12:08 | 0:08:55 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{centos_8}} | 1 |