User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
lflores | 2022-12-13 23:14:56 | 2022-12-14 05:43:39 | 2022-12-14 21:38:57 | 15:55:18 | rados | main | smithi | 18874b7 | 255 | 58 | 7 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7115058 | 2022-12-13 23:16:10 | 2022-12-14 05:43:39 | 2022-12-14 06:03:00 | 0:19:21 | 0:08:49 | 0:10:32 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-bitmap} supported-random-distro$/{ubuntu_latest} tasks/workunits} | 2 | |
pass | 7115059 | 2022-12-13 23:16:11 | 2022-12-14 05:43:39 | 2022-12-14 06:04:29 | 0:20:50 | 0:09:59 | 0:10:51 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
pass | 7115060 | 2022-12-13 23:16:12 | 2022-12-14 05:45:00 | 2022-12-14 06:13:17 | 0:28:17 | 0:17:58 | 0:10:19 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7115061 | 2022-12-13 23:16:14 | 2022-12-14 05:46:01 | 2022-12-14 06:18:43 | 0:32:42 | 0:25:11 | 0:07:31 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason: "/var/log/ceph/1e82beb6-7b75-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T06:11:29.289+0000 7faa0d7e3700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log |
pass | 7115062 | 2022-12-13 23:16:15 | 2022-12-14 05:47:41 | 2022-12-14 06:22:13 | 0:34:32 | 0:23:03 | 0:11:29 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 7115063 | 2022-12-13 23:16:16 | 2022-12-14 05:48:02 | 2022-12-14 06:27:33 | 0:39:31 | 0:32:09 | 0:07:22 | smithi | main | centos | 8.stream | rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{centos_8}} | 1 | |
pass | 7115064 | 2022-12-13 23:16:17 | 2022-12-14 05:48:22 | 2022-12-14 06:39:58 | 0:51:36 | 0:41:11 | 0:10:25 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_mon_osdmap_prune} | 2 | |
fail | 7115065 | 2022-12-13 23:16:19 | 2022-12-14 05:48:33 | 2022-12-14 06:25:30 | 0:36:57 | 0:27:50 | 0:09:07 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 7115066 | 2022-12-13 23:16:20 | 2022-12-14 05:50:23 | 2022-12-14 06:30:06 | 0:39:43 | 0:32:16 | 0:07:27 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 7115067 | 2022-12-13 23:16:21 | 2022-12-14 05:51:14 | 2022-12-14 06:26:38 | 0:35:24 | 0:24:55 | 0:10:29 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
pass | 7115068 | 2022-12-13 23:16:22 | 2022-12-14 05:51:14 | 2022-12-14 06:10:44 | 0:19:30 | 0:11:42 | 0:07:48 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7115069 | 2022-12-13 23:16:23 | 2022-12-14 05:51:45 | 2022-12-14 06:24:35 | 0:32:50 | 0:18:16 | 0:14:34 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: "/var/log/ceph/e56a42d8-7b75-11ed-8441-001a4aab830c/ceph-mon.smithi135.log:2022-12-14T06:19:43.188+0000 7f31f0cd5700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7115070 | 2022-12-13 23:16:25 | 2022-12-14 05:53:05 | 2022-12-14 06:11:46 | 0:18:41 | 0:11:52 | 0:06:49 | smithi | main | centos | 8.stream | rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115071 | 2022-12-13 23:16:26 | 2022-12-14 05:53:06 | 2022-12-14 06:14:49 | 0:21:43 | 0:12:14 | 0:09:29 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7115072 | 2022-12-13 23:16:27 | 2022-12-14 05:53:26 | 2022-12-14 06:56:47 | 1:03:21 | 0:56:04 | 0:07:17 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/radosbench} | 2 | |
pass | 7115073 | 2022-12-13 23:16:28 | 2022-12-14 05:53:26 | 2022-12-14 06:36:47 | 0:43:21 | 0:35:31 | 0:07:50 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7115074 | 2022-12-13 23:16:29 | 2022-12-14 05:53:47 | 2022-12-14 06:14:47 | 0:21:00 | 0:14:37 | 0:06:23 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_adoption} | 1 | |
pass | 7115075 | 2022-12-13 23:16:31 | 2022-12-14 05:53:57 | 2022-12-14 06:32:54 | 0:38:57 | 0:31:53 | 0:07:04 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115076 | 2022-12-13 23:16:32 | 2022-12-14 05:53:57 | 2022-12-14 06:19:21 | 0:25:24 | 0:15:08 | 0:10:16 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 2 | |
dead | 7115077 | 2022-12-13 23:16:33 | 2022-12-14 05:53:58 | 2022-12-14 18:07:34 | 12:13:36 | | | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 |
Failure Reason: hit max job timeout |
pass | 7115078 | 2022-12-13 23:16:34 | 2022-12-14 05:54:38 | 2022-12-14 06:20:21 | 0:25:43 | 0:18:42 | 0:07:01 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/redirect} | 2 | |
fail | 7115079 | 2022-12-13 23:16:36 | 2022-12-14 05:55:39 | 2022-12-14 06:33:35 | 0:37:56 | 0:24:13 | 0:13:43 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "/var/log/ceph/457fedc6-7b76-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T06:15:52.795+0000 7f6ecc272700 0 log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7115080 | 2022-12-13 23:16:37 | 2022-12-14 05:57:20 | 2022-12-14 06:16:16 | 0:18:56 | 0:10:31 | 0:08:25 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
pass | 7115081 | 2022-12-13 23:16:38 | 2022-12-14 05:57:20 | 2022-12-14 06:27:20 | 0:30:00 | 0:22:55 | 0:07:05 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |
pass | 7115082 | 2022-12-13 23:16:39 | 2022-12-14 05:57:30 | 2022-12-14 06:20:53 | 0:23:23 | 0:13:46 | 0:09:37 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115083 | 2022-12-13 23:16:41 | 2022-12-14 05:57:31 | 2022-12-14 06:30:10 | 0:32:39 | 0:22:31 | 0:10:08 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7115084 | 2022-12-13 23:16:42 | 2022-12-14 05:58:41 | 2022-12-14 06:20:06 | 0:21:25 | 0:14:37 | 0:06:48 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115085 | 2022-12-13 23:16:43 | 2022-12-14 05:58:52 | 2022-12-14 06:41:44 | 0:42:52 | 0:36:42 | 0:06:10 | smithi | main | rhel | 8.6 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115086 | 2022-12-13 23:16:45 | 2022-12-14 05:58:52 | 2022-12-14 06:24:34 | 0:25:42 | 0:19:42 | 0:06:00 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/redirect_promote_tests} | 2 | |
fail | 7115087 | 2022-12-13 23:16:46 | 2022-12-14 05:59:23 | 2022-12-14 06:20:20 | 0:20:57 | 0:14:38 | 0:06:19 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "/var/log/ceph/4e7ae7e6-7b76-11ed-8441-001a4aab830c/ceph-mon.smithi079.log:2022-12-14T06:17:44.199+0000 7f9da72e2700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7115088 | 2022-12-13 23:16:47 | 2022-12-14 05:59:23 | 2022-12-14 06:26:19 | 0:26:56 | 0:19:44 | 0:07:12 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} | 2 | |
pass | 7115089 | 2022-12-13 23:16:49 | 2022-12-14 06:01:04 | 2022-12-14 06:35:01 | 0:33:57 | 0:26:08 | 0:07:49 | smithi | main | centos | 8.stream | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 3 | |
pass | 7115090 | 2022-12-13 23:16:50 | 2022-12-14 06:01:44 | 2022-12-14 06:20:42 | 0:18:58 | 0:13:12 | 0:05:46 | smithi | main | rhel | 8.6 | rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7115091 | 2022-12-13 23:16:51 | 2022-12-14 06:01:45 | 2022-12-14 06:36:07 | 0:34:22 | 0:25:37 | 0:08:45 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_full_health (tasks.mgr.dashboard.test_health.HealthTest) |
pass | 7115092 | 2022-12-13 23:16:52 | 2022-12-14 06:03:05 | 2022-12-14 06:24:06 | 0:21:01 | 0:13:39 | 0:07:22 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/crash} | 2 | |
pass | 7115093 | 2022-12-13 23:16:54 | 2022-12-14 06:03:06 | 2022-12-14 06:22:06 | 0:19:00 | 0:08:29 | 0:10:31 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/alloc-hint supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115094 | 2022-12-13 23:16:55 | 2022-12-14 06:04:36 | 2022-12-14 06:26:20 | 0:21:44 | 0:11:37 | 0:10:07 | smithi | main | ubuntu | 20.04 | rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7115095 | 2022-12-13 23:16:56 | 2022-12-14 06:04:57 | 2022-12-14 06:22:35 | 0:17:38 | 0:07:42 | 0:09:56 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} | 1 | |
Failure Reason: Command failed on smithi067 with status 1: 'sudo kubeadm init --node-name smithi067 --token abcdef.rht321d4bb4yx1kw --pod-network-cidr 10.250.16.0/21' |
pass | 7115096 | 2022-12-13 23:16:57 | 2022-12-14 06:04:57 | 2022-12-14 06:34:38 | 0:29:41 | 0:22:00 | 0:07:41 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115097 | 2022-12-13 23:16:58 | 2022-12-14 06:05:08 | 2022-12-14 06:28:43 | 0:23:35 | 0:16:15 | 0:07:20 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/c2c} | 1 | |
fail | 7115098 | 2022-12-13 23:17:00 | 2022-12-14 06:05:08 | 2022-12-14 06:51:56 | 0:46:48 | 0:36:11 | 0:10:37 | smithi | main | rhel | 8.6 | rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_3.0} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: Command failed on smithi072 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone --depth 1 --branch quincy https://github.com/chrisphoffman/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0' |
pass | 7115099 | 2022-12-13 23:17:01 | 2022-12-14 06:08:19 | 2022-12-14 06:35:10 | 0:26:51 | 0:21:04 | 0:05:47 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
pass | 7115100 | 2022-12-13 23:17:02 | 2022-12-14 06:08:20 | 2022-12-14 06:35:36 | 0:27:16 | 0:20:10 | 0:07:06 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} | 1 | |
pass | 7115101 | 2022-12-13 23:17:03 | 2022-12-14 06:08:30 | 2022-12-14 06:35:39 | 0:27:09 | 0:18:36 | 0:08:33 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/redirect_set_object} | 2 | |
pass | 7115102 | 2022-12-13 23:17:05 | 2022-12-14 06:09:51 | 2022-12-14 06:47:19 | 0:37:28 | 0:31:09 | 0:06:19 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
fail | 7115103 | 2022-12-13 23:17:06 | 2022-12-14 06:10:12 | 2022-12-14 06:43:49 | 0:33:37 | 0:26:53 | 0:06:44 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 7115104 | 2022-12-13 23:17:07 | 2022-12-14 06:10:52 | 2022-12-14 06:34:40 | 0:23:48 | 0:11:06 | 0:12:42 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7115105 | 2022-12-13 23:17:08 | 2022-12-14 06:12:03 | 2022-12-14 06:43:43 | 0:31:40 | 0:24:55 | 0:06:45 | smithi | main | centos | 8.stream | rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7115106 | 2022-12-13 23:17:10 | 2022-12-14 06:12:13 | 2022-12-14 06:36:18 | 0:24:05 | 0:16:10 | 0:07:55 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "/var/log/ceph/30727cda-7b78-11ed-8441-001a4aab830c/ceph-mon.c.log:2022-12-14T06:33:07.279+0000 7f4898867700 7 mon.c@2(peon).log v284 update_from_paxos applying incremental log 284 2022-12-14T06:33:06.273125+0000 mon.a (mon.0) 813 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)" in cluster log |
pass | 7115107 | 2022-12-13 23:17:11 | 2022-12-14 06:13:04 | 2022-12-14 06:32:09 | 0:19:05 | 0:12:29 | 0:06:36 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115108 | 2022-12-13 23:17:12 | 2022-12-14 06:13:04 | 2022-12-14 06:34:22 | 0:21:18 | 0:10:40 | 0:10:38 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7115109 | 2022-12-13 23:17:13 | 2022-12-14 06:14:55 | 2022-12-14 06:32:00 | 0:17:05 | 0:08:35 | 0:08:30 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
pass | 7115110 | 2022-12-13 23:17:15 | 2022-12-14 06:14:55 | 2022-12-14 06:37:55 | 0:23:00 | 0:17:16 | 0:05:44 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} | 2 | |
pass | 7115111 | 2022-12-13 23:17:16 | 2022-12-14 06:14:55 | 2022-12-14 06:51:39 | 0:36:44 | 0:25:37 | 0:11:07 | smithi | main | centos | 8.stream | rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 2 | |
fail | 7115112 | 2022-12-13 23:17:17 | 2022-12-14 06:18:46 | 2022-12-14 06:45:58 | 0:27:12 | 0:16:39 | 0:10:33 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: "/var/log/ceph/0390e3cc-7b79-11ed-8441-001a4aab830c/ceph-mon.smithi133.log:2022-12-14T06:36:36.609+0000 7f8152e1e700 0 log_channel(cluster) log [WRN] : Health check failed: 1 slow ops, oldest one blocked for 30 sec, mon.smithi133 has slow ops (SLOW_OPS)" in cluster log |
pass | 7115113 | 2022-12-13 23:17:19 | 2022-12-14 06:18:47 | 2022-12-14 06:57:12 | 0:38:25 | 0:31:31 | 0:06:54 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
pass | 7115114 | 2022-12-13 23:17:20 | 2022-12-14 06:18:47 | 2022-12-14 06:51:06 | 0:32:19 | 0:20:53 | 0:11:26 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
pass | 7115115 | 2022-12-13 23:17:21 | 2022-12-14 06:19:28 | 2022-12-14 06:38:24 | 0:18:56 | 0:10:41 | 0:08:15 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115116 | 2022-12-13 23:17:22 | 2022-12-14 06:19:28 | 2022-12-14 06:51:06 | 0:31:38 | 0:24:11 | 0:07:27 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/small-objects-balanced} | 2 | |
pass | 7115117 | 2022-12-13 23:17:23 | 2022-12-14 06:20:29 | 2022-12-14 06:43:48 | 0:23:19 | 0:14:44 | 0:08:35 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} | 2 | |
fail | 7115118 | 2022-12-13 23:17:25 | 2022-12-14 06:20:29 | 2022-12-14 06:41:58 | 0:21:29 | 0:13:59 | 0:07:30 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: "/var/log/ceph/4957eefa-7b79-11ed-8441-001a4aab830c/ceph-mon.smithi137.log:2022-12-14T06:38:57.152+0000 7fc0878d4700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7115119 | 2022-12-13 23:17:26 | 2022-12-14 06:20:49 | 2022-12-14 06:55:26 | 0:34:37 | 0:27:05 | 0:07:32 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7115120 | 2022-12-13 23:17:27 | 2022-12-14 06:22:10 | 2022-12-14 06:46:19 | 0:24:09 | 0:14:16 | 0:09:53 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115121 | 2022-12-13 23:17:28 | 2022-12-14 06:22:20 | 2022-12-14 06:41:24 | 0:19:04 | 0:12:07 | 0:06:57 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115122 | 2022-12-13 23:17:29 | 2022-12-14 06:22:21 | 2022-12-14 06:54:42 | 0:32:21 | 0:24:22 | 0:07:59 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/small-objects-localized} | 2 | |
pass | 7115123 | 2022-12-13 23:17:31 | 2022-12-14 06:23:41 | 2022-12-14 06:46:43 | 0:23:02 | 0:16:23 | 0:06:39 | smithi | main | centos | 8.stream | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{centos_8}} | 1 | |
pass | 7115124 | 2022-12-13 23:17:32 | 2022-12-14 06:23:42 | 2022-12-14 06:39:29 | 0:15:47 | 0:06:02 | 0:09:45 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 7115125 | 2022-12-13 23:17:33 | 2022-12-14 06:24:12 | 2022-12-14 06:42:45 | 0:18:33 | 0:07:56 | 0:10:37 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 2 | |
pass | 7115126 | 2022-12-13 23:17:34 | 2022-12-14 06:24:43 | 2022-12-14 07:22:30 | 0:57:47 | 0:47:45 | 0:10:02 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115127 | 2022-12-13 23:17:36 | 2022-12-14 06:24:43 | 2022-12-14 06:42:05 | 0:17:22 | 0:08:21 | 0:09:01 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} | 1 | |
pass | 7115128 | 2022-12-13 23:17:37 | 2022-12-14 06:24:43 | 2022-12-14 06:48:32 | 0:23:49 | 0:16:11 | 0:07:38 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/failover} | 2 | |
pass | 7115129 | 2022-12-13 23:17:38 | 2022-12-14 06:25:34 | 2022-12-14 07:05:30 | 0:39:56 | 0:33:28 | 0:06:28 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_workunits} | 2 | |
fail | 7115130 | 2022-12-13 23:17:39 | 2022-12-14 06:26:25 | 2022-12-14 06:45:59 | 0:19:34 | 0:09:30 | 0:10:04 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command failed (workunit test post-file.sh) on smithi155 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh' |
pass | 7115131 | 2022-12-13 23:17:40 | 2022-12-14 06:26:25 | 2022-12-14 06:55:42 | 0:29:17 | 0:19:47 | 0:09:30 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/crush} | 1 | |
dead | 7115132 | 2022-12-13 23:17:42 | 2022-12-14 06:26:25 | 2022-12-14 06:28:09 | 0:01:44 | | | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/small-objects} | 2 |
Failure Reason:
Error reimaging machines: Failed to power on smithi026 |
fail | 7115133 | 2022-12-13 23:17:43 | 2022-12-14 07:09:18 | 2089 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | ||||
Failure Reason:
"/var/log/ceph/b5cd4bf6-7b7a-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T06:59:59.999+0000 7fa0da191700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log |
fail | 7115134 | 2022-12-13 23:17:44 | 2022-12-14 06:27:27 | 2022-12-14 07:31:27 | 1:04:00 | 0:56:40 | 0:07:20 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi026 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
fail | 7115135 | 2022-12-13 23:17:45 | 2022-12-14 06:28:17 | 2022-12-14 06:52:20 | 0:24:03 | 0:16:32 | 0:07:31 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/71ae272e-7b7a-11ed-8441-001a4aab830c/ceph-mon.c.log:2022-12-14T06:45:25.588+0000 7fb50801b700 7 mon.c@2(peon).log v106 update_from_paxos applying incremental log 106 2022-12-14T06:45:25.255535+0000 mon.a (mon.0) 402 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
pass | 7115136 | 2022-12-13 23:17:47 | 2022-12-14 06:28:48 | 2022-12-14 06:52:20 | 0:23:32 | 0:12:00 | 0:11:32 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7115137 | 2022-12-13 23:17:48 | 2022-12-14 06:30:08 | 2022-12-14 06:49:45 | 0:19:37 | 0:09:10 | 0:10:27 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7115138 | 2022-12-13 23:17:50 | 2022-12-14 06:30:19 | 2022-12-14 07:10:07 | 0:39:48 | 0:30:42 | 0:09:06 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7115139 | 2022-12-13 23:17:51 | 2022-12-14 06:33:55 | 2022-12-14 06:57:21 | 0:23:26 | 0:16:06 | 0:07:20 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
pass | 7115140 | 2022-12-13 23:17:52 | 2022-12-14 06:34:25 | 2022-12-14 07:11:25 | 0:37:00 | 0:31:18 | 0:05:42 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
pass | 7115141 | 2022-12-13 23:17:53 | 2022-12-14 06:34:26 | 2022-12-14 06:53:51 | 0:19:25 | 0:13:45 | 0:05:40 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115142 | 2022-12-13 23:17:55 | 2022-12-14 06:34:46 | 2022-12-14 06:59:47 | 0:25:01 | 0:17:53 | 0:07:08 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} | 2 | |
fail | 7115143 | 2022-12-13 23:17:56 | 2022-12-14 06:34:47 | 2022-12-14 07:03:04 | 0:28:17 | 0:20:26 | 0:07:51 | smithi | main | centos | 8.stream | rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_crash.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_crash.sh' |
pass | 7115144 | 2022-12-13 23:17:57 | 2022-12-14 06:34:47 | 2022-12-14 07:12:17 | 0:37:30 | 0:31:24 | 0:06:06 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
dead | 7115145 | 2022-12-13 23:17:58 | 2022-12-14 06:35:08 | 2022-12-14 06:54:41 | 0:19:33 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
pass | 7115146 | 2022-12-13 23:18:00 | 2022-12-14 06:35:08 | 2022-12-14 07:15:06 | 0:39:58 | 0:29:59 | 0:09:59 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
pass | 7115147 | 2022-12-13 23:18:01 | 2022-12-14 06:35:39 | 2022-12-14 06:53:37 | 0:17:58 | 0:11:11 | 0:06:47 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115148 | 2022-12-13 23:18:02 | 2022-12-14 06:35:49 | 2022-12-14 06:53:08 | 0:17:19 | 0:08:43 | 0:08:36 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
pass | 7115149 | 2022-12-13 23:18:03 | 2022-12-14 06:35:50 | 2022-12-14 07:09:21 | 0:33:31 | 0:24:39 | 0:08:52 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7115150 | 2022-12-13 23:18:05 | 2022-12-14 06:36:10 | 2022-12-14 06:55:49 | 0:19:39 | 0:09:03 | 0:10:36 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7115151 | 2022-12-13 23:18:06 | 2022-12-14 06:36:20 | 2022-12-14 07:01:57 | 0:25:37 | 0:18:43 | 0:06:54 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/baa394ea-7b7b-11ed-8441-001a4aab830c/ceph-mon.c.log:2022-12-14T06:53:53.626+0000 7f3ec5a44700 7 mon.c@2(synchronizing).log v65 update_from_paxos applying incremental log 64 2022-12-14T06:53:51.640370+0000 mon.a (mon.0) 176 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7115152 | 2022-12-13 23:18:07 | 2022-12-14 06:36:51 | 2022-12-14 06:54:11 | 0:17:20 | 0:08:20 | 0:09:00 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/filejournal supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115153 | 2022-12-13 23:18:08 | 2022-12-14 06:36:51 | 2022-12-14 07:14:59 | 0:38:08 | 0:29:39 | 0:08:29 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 2 | |
pass | 7115154 | 2022-12-13 23:18:10 | 2022-12-14 06:38:02 | 2022-12-14 06:57:20 | 0:19:18 | 0:13:04 | 0:06:14 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 2 | |
pass | 7115155 | 2022-12-13 23:18:11 | 2022-12-14 06:38:33 | 2022-12-14 06:58:52 | 0:20:19 | 0:08:56 | 0:11:23 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115156 | 2022-12-13 23:18:12 | 2022-12-14 06:39:33 | 2022-12-14 07:11:20 | 0:31:47 | 0:20:53 | 0:10:54 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
fail | 7115157 | 2022-12-13 23:18:13 | 2022-12-14 06:39:53 | 2022-12-14 07:03:04 | 0:23:11 | 0:17:00 | 0:06:11 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason:
Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) |
pass | 7115158 | 2022-12-13 23:18:15 | 2022-12-14 06:40:04 | 2022-12-14 07:04:30 | 0:24:26 | 0:19:23 | 0:05:03 | smithi | main | centos | 8.stream | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115159 | 2022-12-13 23:18:16 | 2022-12-14 06:40:04 | 2022-12-14 07:08:38 | 0:28:34 | 0:20:05 | 0:08:29 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 7115160 | 2022-12-13 23:18:17 | 2022-12-14 06:41:45 | 2022-12-14 07:27:03 | 0:45:18 | 0:38:40 | 0:06:38 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/one workloads/snaps-few-objects} | 2 | |
pass | 7115161 | 2022-12-13 23:18:19 | 2022-12-14 06:42:05 | 2022-12-14 07:07:10 | 0:25:05 | 0:13:04 | 0:12:01 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
pass | 7115162 | 2022-12-13 23:18:20 | 2022-12-14 06:42:56 | 2022-12-14 07:08:53 | 0:25:57 | 0:18:20 | 0:07:37 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{rhel_8} tasks/insights} | 2 | |
pass | 7115163 | 2022-12-13 23:18:21 | 2022-12-14 06:43:47 | 2022-12-14 07:05:04 | 0:21:17 | 0:15:12 | 0:06:05 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7115164 | 2022-12-13 23:18:22 | 2022-12-14 06:43:57 | 2022-12-14 07:03:15 | 0:19:18 | 0:10:55 | 0:08:23 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_striper} | 2 | |
pass | 7115165 | 2022-12-13 23:18:24 | 2022-12-14 06:46:08 | 2022-12-14 07:30:09 | 0:44:01 | 0:36:44 | 0:07:17 | smithi | main | rhel | 8.6 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115166 | 2022-12-13 23:18:25 | 2022-12-14 06:46:29 | 2022-12-14 07:10:39 | 0:24:10 | 0:16:10 | 0:08:00 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7115167 | 2022-12-13 23:18:26 | 2022-12-14 06:47:29 | 2022-12-14 07:07:11 | 0:19:42 | 0:09:21 | 0:10:21 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115168 | 2022-12-13 23:18:27 | 2022-12-14 06:47:30 | 2022-12-14 08:27:27 | 1:39:57 | 1:28:04 | 0:11:53 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} | 1 | |
pass | 7115169 | 2022-12-13 23:18:29 | 2022-12-14 06:48:40 | 2022-12-14 08:11:21 | 1:22:41 | 1:11:09 | 0:11:32 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7115170 | 2022-12-13 23:18:30 | 2022-12-14 06:49:51 | 2022-12-14 07:12:40 | 0:22:49 | 0:11:35 | 0:11:14 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7115171 | 2022-12-13 23:18:31 | 2022-12-14 06:51:12 | 2022-12-14 07:26:28 | 0:35:16 | 0:25:49 | 0:09:27 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 7115172 | 2022-12-13 23:18:32 | 2022-12-14 06:51:12 | 2022-12-14 07:09:03 | 0:17:51 | 0:08:42 | 0:09:09 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} | 1 | |
dead | 7115173 | 2022-12-13 23:18:34 | 2022-12-14 06:51:42 | 2022-12-14 07:11:40 | 0:19:58 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
fail | 7115174 | 2022-12-13 23:18:35 | 2022-12-14 06:52:03 | 2022-12-14 07:23:51 | 0:31:48 | 0:23:45 | 0:08:03 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/e2e} | 2 | |
Failure Reason:
"/var/log/ceph/25d75d6c-7b7e-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T07:18:33.300+0000 7fe44feac700 0 log_channel(cluster) log [WRN] : Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)" in cluster log |
fail | 7115175 | 2022-12-13 23:18:36 | 2022-12-14 06:52:23 | 2022-12-14 07:12:30 | 0:20:07 | 0:08:36 | 0:11:31 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} | 3 | |
Failure Reason:
Command failed on smithi005 with status 1: 'sudo kubeadm init --node-name smithi005 --token abcdef.331oooezqrkma3yy --pod-network-cidr 10.248.32.0/21' |
pass | 7115176 | 2022-12-13 23:18:37 | 2022-12-14 06:52:24 | 2022-12-14 07:12:45 | 0:20:21 | 0:13:48 | 0:06:33 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115177 | 2022-12-13 23:18:39 | 2022-12-14 06:52:24 | 2022-12-14 08:02:43 | 1:10:19 | 0:59:41 | 0:10:38 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-radosbench} | 2 | |
dead | 7115178 | 2022-12-13 23:18:40 | 2022-12-14 06:53:15 | 2022-12-14 21:38:57 | 14:45:42 | smithi | main | centos | 8.stream | rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 2 | |||
Failure Reason:
hit max job timeout |
fail | 7115179 | 2022-12-13 23:18:41 | 2022-12-14 06:53:55 | 2022-12-14 08:31:53 | 1:37:58 | 1:31:17 | 0:06:41 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
"/var/log/ceph/6523e71a-7b7e-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T07:19:59.998+0000 7f5461209700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log |
fail | 7115180 | 2022-12-13 23:18:43 | 2022-12-14 06:55:44 | 2022-12-14 07:20:58 | 0:25:14 | 0:18:20 | 0:06:54 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason:
"/var/log/ceph/ce6a888c-7b7e-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T07:18:10.604+0000 7fa70aad0700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7115181 | 2022-12-13 23:18:44 | 2022-12-14 06:55:44 | 2022-12-14 07:27:30 | 0:31:46 | 0:20:35 | 0:11:11 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-agent-big} | 2 | |
pass | 7115182 | 2022-12-13 23:18:45 | 2022-12-14 06:55:44 | 2022-12-14 07:33:32 | 0:37:48 | 0:29:49 | 0:07:59 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7115183 | 2022-12-13 23:18:46 | 2022-12-14 06:56:55 | 2022-12-14 09:30:19 | 2:33:24 | 2:26:17 | 0:07:07 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115184 | 2022-12-13 23:18:48 | 2022-12-14 06:56:55 | 2022-12-14 07:26:13 | 0:29:18 | 0:20:10 | 0:09:08 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7115185 | 2022-12-13 23:18:49 | 2022-12-14 06:57:16 | 2022-12-14 07:18:16 | 0:21:00 | 0:14:37 | 0:06:23 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115186 | 2022-12-13 23:18:51 | 2022-12-14 06:57:26 | 2022-12-14 07:20:17 | 0:22:51 | 0:16:09 | 0:06:42 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7115187 | 2022-12-13 23:18:52 | 2022-12-14 06:57:26 | 2022-12-14 07:24:07 | 0:26:41 | 0:14:23 | 0:12:18 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 3 | |
pass | 7115188 | 2022-12-13 23:18:53 | 2022-12-14 06:57:57 | 2022-12-14 07:22:58 | 0:25:01 | 0:17:10 | 0:07:51 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-agent-small} | 2 | |
pass | 7115189 | 2022-12-13 23:18:54 | 2022-12-14 06:59:57 | 2022-12-14 07:40:58 | 0:41:01 | 0:31:32 | 0:09:29 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_big} | 2 | |
pass | 7115190 | 2022-12-13 23:18:56 | 2022-12-14 07:01:59 | 2022-12-14 07:35:11 | 0:33:12 | 0:23:30 | 0:09:42 | smithi | main | centos | 8.stream | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 2 | |
fail | 7115191 | 2022-12-13 23:18:57 | 2022-12-14 07:03:09 | 2022-12-14 07:24:26 | 0:21:17 | 0:11:27 | 0:09:50 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_librados_build.sh) on smithi143 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh' |
fail | 7115192 | 2022-12-13 23:18:58 | 2022-12-14 07:03:10 | 2022-12-14 07:26:36 | 0:23:26 | 0:15:46 | 0:07:40 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
"/var/log/ceph/6d9d7b30-7b7f-11ed-8441-001a4aab830c/ceph-mon.smithi133.log:2022-12-14T07:22:53.942+0000 7f4a27683700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7115193 | 2022-12-13 23:19:00 | 2022-12-14 07:03:20 | 2022-12-14 07:44:14 | 0:40:54 | 0:31:38 | 0:09:16 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 7115194 | 2022-12-13 23:19:01 | 2022-12-14 07:05:11 | 2022-12-14 07:22:42 | 0:17:31 | 0:08:23 | 0:09:08 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
pass | 7115195 | 2022-12-13 23:19:02 | 2022-12-14 07:05:12 | 2022-12-14 07:29:50 | 0:24:38 | 0:18:25 | 0:06:13 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/pool-create-delete} | 2 | |
pass | 7115196 | 2022-12-13 23:19:03 | 2022-12-14 07:05:13 | 2022-12-14 07:29:23 | 0:24:10 | 0:11:58 | 0:12:12 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7115197 | 2022-12-13 23:19:05 | 2022-12-14 07:07:14 | 2022-12-14 07:41:45 | 0:34:31 | 0:26:58 | 0:07:33 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
pass | 7115198 | 2022-12-13 23:19:06 | 2022-12-14 07:08:44 | 2022-12-14 07:46:02 | 0:37:18 | 0:31:18 | 0:06:00 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{rhel_8} tasks/module_selftest} | 2 | |
pass | 7115199 | 2022-12-13 23:19:07 | 2022-12-14 07:08:55 | 2022-12-14 07:27:50 | 0:18:55 | 0:11:39 | 0:07:16 | smithi | main | centos | 8.stream | rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7115200 | 2022-12-13 23:19:09 | 2022-12-14 07:08:55 | 2022-12-14 07:47:38 | 0:38:43 | 0:29:48 | 0:08:55 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason:
"/var/log/ceph/4b379376-7b81-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T07:34:55.739+0000 7f976e99b700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
pass | 7115201 | 2022-12-13 23:19:10 | 2022-12-14 07:09:26 | 2022-12-14 07:51:23 | 0:41:57 | 0:33:56 | 0:08:01 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7115202 | 2022-12-13 23:19:11 | 2022-12-14 07:10:16 | 2022-12-14 07:35:15 | 0:24:59 | 0:17:55 | 0:07:04 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115203 | 2022-12-13 23:19:12 | 2022-12-14 07:10:17 | 2022-12-14 07:36:17 | 0:26:00 | 0:14:59 | 0:11:01 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mgr} | 1 | |
pass | 7115204 | 2022-12-13 23:19:14 | 2022-12-14 07:10:47 | 2022-12-14 07:39:29 | 0:28:42 | 0:22:49 | 0:05:53 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
pass | 7115205 | 2022-12-13 23:19:15 | 2022-12-14 07:10:47 | 2022-12-14 08:00:30 | 0:49:43 | 0:42:04 | 0:07:39 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
pass | 7115206 | 2022-12-13 23:19:16 | 2022-12-14 07:11:28 | 2022-12-14 07:46:35 | 0:35:07 | 0:28:30 | 0:06:37 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-pool-snaps} | 2 | |
fail | 7115207 | 2022-12-13 23:19:17 | 2022-12-14 07:11:28 | 2022-12-14 07:46:15 | 0:34:47 | 0:23:57 | 0:10:50 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/ad67aea6-7b80-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T07:40:47.504+0000 7efc64c12700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log |
pass | 7115208 | 2022-12-13 23:19:19 | 2022-12-14 07:11:59 | 2022-12-14 07:43:27 | 0:31:28 | 0:25:05 | 0:06:23 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} | 2 | |
pass | 7115209 | 2022-12-13 23:19:20 | 2022-12-14 07:12:19 | 2022-12-14 07:31:27 | 0:19:08 | 0:11:57 | 0:07:11 | smithi | main | centos | 8.stream | rados/singleton/{all/admin-socket mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115210 | 2022-12-13 23:19:22 | 2022-12-14 07:12:40 | 2022-12-14 07:44:22 | 0:31:42 | 0:21:29 | 0:10:13 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7115211 | 2022-12-13 23:19:23 | 2022-12-14 07:12:40 | 2022-12-14 09:51:28 | 2:38:48 | 2:33:15 | 0:05:33 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115212 | 2022-12-13 23:19:24 | 2022-12-14 07:12:51 | 2022-12-14 07:42:01 | 0:29:10 | 0:22:36 | 0:06:34 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache-snaps-balanced} | 2 | |
pass | 7115213 | 2022-12-13 23:19:26 | 2022-12-14 07:12:51 | 2022-12-14 07:50:32 | 0:37:41 | 0:26:26 | 0:11:15 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7115214 | 2022-12-13 23:19:27 | 2022-12-14 07:15:02 | 2022-12-14 07:44:27 | 0:29:25 | 0:22:28 | 0:06:57 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_mix} | 2 | |
pass | 7115215 | 2022-12-13 23:19:28 | 2022-12-14 07:15:02 | 2022-12-14 07:34:31 | 0:19:29 | 0:09:16 | 0:10:13 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_adoption} | 1 | |
pass | 7115216 | 2022-12-13 23:19:29 | 2022-12-14 07:15:12 | 2022-12-14 07:48:35 | 0:33:23 | 0:23:30 | 0:09:53 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
pass | 7115217 | 2022-12-13 23:19:31 | 2022-12-14 07:15:13 | 2022-12-14 07:46:32 | 0:31:19 | 0:20:35 | 0:10:44 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115218 | 2022-12-13 23:19:32 | 2022-12-14 07:17:03 | 2022-12-14 07:44:26 | 0:27:23 | 0:17:57 | 0:09:26 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115219 | 2022-12-13 23:19:33 | 2022-12-14 07:17:04 | 2022-12-14 07:35:50 | 0:18:46 | 0:11:11 | 0:07:35 | smithi | main | centos | 8.stream | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 3 | |
fail | 7115220 | 2022-12-13 23:19:34 | 2022-12-14 07:18:45 | 2022-12-14 07:50:31 | 0:31:46 | 0:19:21 | 0:12:25 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
"/var/log/ceph/bc561596-7b81-11ed-8441-001a4aab830c/ceph-mon.smithi134.log:2022-12-14T07:46:24.496+0000 7f672d5ea700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7115221 | 2022-12-13 23:19:36 | 2022-12-14 07:21:06 | 2022-12-14 07:52:11 | 0:31:05 | 0:22:57 | 0:08:08 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/cache-snaps} | 2 | |
pass | 7115222 | 2022-12-13 23:19:37 | 2022-12-14 07:22:46 | 2022-12-14 07:53:18 | 0:30:32 | 0:22:17 | 0:08:15 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
pass | 7115223 | 2022-12-13 23:19:38 | 2022-12-14 07:23:57 | 2022-12-14 07:43:14 | 0:19:17 | 0:10:32 | 0:08:45 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7115224 | 2022-12-13 23:19:40 | 2022-12-14 07:23:57 | 2022-12-14 08:00:07 | 0:36:10 | 0:27:28 | 0:08:42 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 7115225 | 2022-12-13 23:19:41 | 2022-12-14 07:24:18 | 2022-12-14 07:47:26 | 0:23:08 | 0:12:25 | 0:10:43 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7115226 | 2022-12-13 23:19:42 | 2022-12-14 07:26:19 | 2022-12-14 07:47:00 | 0:20:41 | 0:15:45 | 0:04:56 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_5925} | 2 | |
fail | 7115227 | 2022-12-13 23:19:44 | 2022-12-14 07:26:29 | 2022-12-14 07:49:50 | 0:23:21 | 0:16:24 | 0:06:57 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/7883f0b2-7b82-11ed-8441-001a4aab830c/ceph-mon.c.log:2022-12-14T07:42:27.850+0000 7fb1d24e2700 7 mon.c@2(peon).log v96 update_from_paxos applying incremental log 96 2022-12-14T07:42:26.859756+0000 mon.a (mon.0) 349 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
pass | 7115228 | 2022-12-13 23:19:45 | 2022-12-14 07:26:39 | 2022-12-14 07:44:51 | 0:18:12 | 0:08:43 | 0:09:29 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115229 | 2022-12-13 23:19:46 | 2022-12-14 07:27:10 | 2022-12-14 07:55:04 | 0:27:54 | 0:19:33 | 0:08:21 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache} | 2 | |
pass | 7115230 | 2022-12-13 23:19:48 | 2022-12-14 07:27:40 | 2022-12-14 07:58:55 | 0:31:15 | 0:25:41 | 0:05:34 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-stupid} supported-random-distro$/{rhel_8} tasks/progress} | 2 | |
pass | 7115231 | 2022-12-13 23:19:49 | 2022-12-14 07:27:51 | 2022-12-14 07:51:19 | 0:23:28 | 0:13:51 | 0:09:37 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7115232 | 2022-12-13 23:19:50 | 2022-12-14 07:29:31 | 2022-12-14 07:50:46 | 0:21:15 | 0:14:22 | 0:06:53 | smithi | main | rhel | 8.6 | rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115233 | 2022-12-13 23:19:52 | 2022-12-14 07:29:32 | 2022-12-14 07:54:31 | 0:24:59 | 0:17:48 | 0:07:11 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} | 1 | |
pass | 7115234 | 2022-12-13 23:19:53 | 2022-12-14 07:29:52 | 2022-12-14 08:10:14 | 0:40:22 | 0:34:37 | 0:05:45 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115235 | 2022-12-13 23:19:54 | 2022-12-14 07:29:52 | 2022-12-14 08:41:55 | 1:12:03 | 1:04:55 | 0:07:08 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/misc} | 1 | |
pass | 7115236 | 2022-12-13 23:19:56 | 2022-12-14 07:30:13 | 2022-12-14 07:54:47 | 0:24:34 | 0:12:41 | 0:11:53 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/dedup-io-mixed} | 2 | |
pass | 7115237 | 2022-12-13 23:19:57 | 2022-12-14 07:31:33 | 2022-12-14 08:08:14 | 0:36:41 | 0:29:32 | 0:07:09 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
pass | 7115238 | 2022-12-13 23:19:58 | 2022-12-14 07:33:34 | 2022-12-14 07:52:47 | 0:19:13 | 0:10:12 | 0:09:01 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
pass | 7115239 | 2022-12-13 23:20:00 | 2022-12-14 07:33:34 | 2022-12-14 08:09:28 | 0:35:54 | 0:25:56 | 0:09:58 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |
pass | 7115240 | 2022-12-13 23:20:01 | 2022-12-14 07:35:15 | 2022-12-14 07:54:14 | 0:18:59 | 0:12:09 | 0:06:50 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/fusestore supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115241 | 2022-12-13 23:20:02 | 2022-12-14 07:35:15 | 2022-12-14 07:54:15 | 0:19:00 | 0:12:19 | 0:06:41 | smithi | main | centos | 8.stream | rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7115242 | 2022-12-13 23:20:04 | 2022-12-14 07:35:16 | 2022-12-14 07:57:02 | 0:21:46 | 0:14:27 | 0:07:19 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
"/var/log/ceph/c038d386-7b83-11ed-8441-001a4aab830c/ceph-mon.smithi169.log:2022-12-14T07:54:13.365+0000 7fad3bd8b700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7115243 | 2022-12-13 23:20:05 | 2022-12-14 07:35:56 | 2022-12-14 08:18:55 | 0:42:59 | 0:37:16 | 0:05:43 | smithi | main | rhel | 8.6 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115244 | 2022-12-13 23:20:07 | 2022-12-14 07:35:57 | 2022-12-14 08:14:45 | 0:38:48 | 0:27:59 | 0:10:49 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7115245 | 2022-12-13 23:20:08 | 2022-12-14 07:39:38 | 2022-12-14 08:04:04 | 0:24:26 | 0:15:48 | 0:08:38 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/dedup-io-snaps} | 2 | |
pass | 7115246 | 2022-12-13 23:20:09 | 2022-12-14 07:40:58 | 2022-12-14 08:03:06 | 0:22:08 | 0:11:08 | 0:11:00 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115247 | 2022-12-13 23:20:11 | 2022-12-14 07:41:49 | 2022-12-14 08:13:40 | 0:31:51 | 0:20:58 | 0:10:53 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
fail | 7115248 | 2022-12-13 23:20:12 | 2022-12-14 07:42:09 | 2022-12-14 08:06:20 | 0:24:11 | 0:16:12 | 0:07:59 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/cb1f1868-7b84-11ed-8441-001a4aab830c/ceph-mon.c.log:2022-12-14T07:58:20.787+0000 7f3c7e796700 7 mon.c@2(synchronizing).log v58 update_from_paxos applying incremental log 57 2022-12-14T07:58:18.800147+0000 mon.a (mon.0) 173 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7115249 | 2022-12-13 23:20:14 | 2022-12-14 07:44:16 | 2022-12-14 08:06:08 | 0:21:52 | 0:15:27 | 0:06:25 | smithi | main | rhel | 8.6 | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115250 | 2022-12-13 23:20:15 | 2022-12-14 07:44:26 | 2022-12-14 08:02:02 | 0:17:36 | 0:11:05 | 0:06:31 | smithi | main | centos | 8.stream | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
pass | 7115251 | 2022-12-13 23:20:16 | 2022-12-14 07:44:37 | 2022-12-14 08:21:37 | 0:37:00 | 0:31:15 | 0:05:45 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
fail | 7115252 | 2022-12-13 23:20:17 | 2022-12-14 07:44:37 | 2022-12-14 08:19:15 | 0:34:38 | 0:25:50 | 0:08:48 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} tasks/dashboard} | 2 | |
Failure Reason:
Test failure: test_full_health (tasks.mgr.dashboard.test_health.HealthTest) |
fail | 7115253 | 2022-12-13 23:20:19 | 2022-12-14 07:46:08 | 2022-12-14 08:03:42 | 0:17:34 | 0:08:18 | 0:09:16 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} | 1 | |
Failure Reason:
Command failed on smithi203 with status 1: 'sudo kubeadm init --node-name smithi203 --token abcdef.x97figh0lpxoyv1r --pod-network-cidr 10.254.80.0/21' |
pass | 7115254 | 2022-12-13 23:20:20 | 2022-12-14 07:46:08 | 2022-12-14 08:13:12 | 0:27:04 | 0:20:51 | 0:06:13 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7115255 | 2022-12-13 23:20:21 | 2022-12-14 07:46:18 | 2022-12-14 08:27:42 | 0:41:24 | 0:35:28 | 0:05:56 | smithi | main | rhel | 8.6 | rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_3.0} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed on smithi122 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone --depth 1 --branch quincy https://github.com/chrisphoffman/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0' |
pass | 7115256 | 2022-12-13 23:20:23 | 2022-12-14 07:46:39 | 2022-12-14 08:01:44 | 0:15:05 | 0:08:49 | 0:06:16 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7115257 | 2022-12-13 23:20:24 | 2022-12-14 07:46:39 | 2022-12-14 08:54:11 | 1:07:32 | 1:01:39 | 0:05:53 | smithi | main | rhel | 8.6 | rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115258 | 2022-12-13 23:20:25 | 2022-12-14 07:46:39 | 2022-12-14 10:11:56 | 2:25:17 | 2:17:54 | 0:07:23 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
pass | 7115259 | 2022-12-13 23:20:26 | 2022-12-14 07:47:10 | 2022-12-14 08:10:32 | 0:23:22 | 0:17:08 | 0:06:14 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7115260 | 2022-12-13 23:20:28 | 2022-12-14 07:47:30 | 2022-12-14 08:24:54 | 0:37:24 | 0:27:36 | 0:09:48 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_api_tests} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi196 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 7115261 | 2022-12-13 23:20:29 | 2022-12-14 07:47:41 | 2022-12-14 08:07:37 | 0:19:56 | 0:08:51 | 0:11:05 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} | 1 | |
fail | 7115262 | 2022-12-13 23:20:30 | 2022-12-14 07:47:41 | 2022-12-14 08:30:41 | 0:43:00 | 0:36:42 | 0:06:18 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
fail | 7115263 | 2022-12-13 23:20:32 | 2022-12-14 07:47:42 | 2022-12-14 08:09:41 | 0:21:59 | 0:13:43 | 0:08:16 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
"/var/log/ceph/938c5ec8-7b85-11ed-8441-001a4aab830c/ceph-mon.smithi062.log:2022-12-14T08:04:48.525+0000 7f77450bb700 0 log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
pass | 7115264 | 2022-12-13 23:20:33 | 2022-12-14 07:48:42 | 2022-12-14 08:12:55 | 0:24:13 | 0:15:52 | 0:08:21 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/readwrite} | 2 | |
pass | 7115265 | 2022-12-13 23:20:34 | 2022-12-14 07:49:53 | 2022-12-14 08:09:36 | 0:19:43 | 0:12:55 | 0:06:48 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115266 | 2022-12-13 23:20:36 | 2022-12-14 07:50:33 | 2022-12-14 08:27:32 | 0:36:59 | 0:31:02 | 0:05:57 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7115267 | 2022-12-13 23:20:37 | 2022-12-14 07:50:33 | 2022-12-14 08:48:15 | 0:57:42 | 0:51:32 | 0:06:10 | smithi | main | rhel | 8.6 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115268 | 2022-12-13 23:20:39 | 2022-12-14 07:50:54 | 2022-12-14 08:12:30 | 0:21:36 | 0:15:00 | 0:06:36 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{centos_8} tasks/prometheus} | 2 | |
fail | 7115269 | 2022-12-13 23:20:40 | 2022-12-14 07:51:24 | 2022-12-14 08:37:11 | 0:45:47 | 0:37:23 | 0:08:24 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
"/var/log/ceph/71a1d940-7b86-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T08:10:24.934+0000 7f667c304700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log |
fail | 7115270 | 2022-12-13 23:20:41 | 2022-12-14 07:51:25 | 2022-12-14 08:13:40 | 0:22:15 | 0:15:08 | 0:07:07 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
"/var/log/ceph/204f2e6c-7b86-11ed-8441-001a4aab830c/ceph-mon.smithi079.log:2022-12-14T08:10:50.717+0000 7fec6e1ea700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7115271 | 2022-12-13 23:20:42 | 2022-12-14 07:52:15 | 2022-12-14 08:15:22 | 0:23:07 | 0:16:01 | 0:07:06 | smithi | main | centos | 8.stream | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{centos_8}} | 1 | |
pass | 7115272 | 2022-12-13 23:20:44 | 2022-12-14 07:52:16 | 2022-12-14 08:36:59 | 0:44:43 | 0:36:10 | 0:08:33 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 7115273 | 2022-12-13 23:20:45 | 2022-12-14 07:53:27 | 2022-12-14 08:27:13 | 0:33:46 | 0:27:03 | 0:06:43 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
pass | 7115274 | 2022-12-13 23:20:46 | 2022-12-14 07:53:27 | 2022-12-14 08:15:08 | 0:21:41 | 0:14:01 | 0:07:40 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115275 | 2022-12-13 23:20:48 | 2022-12-14 07:54:17 | 2022-12-14 09:19:17 | 1:25:00 | 1:17:44 | 0:07:16 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/mon} | 1 | |
pass | 7115276 | 2022-12-13 23:20:49 | 2022-12-14 07:54:18 | 2022-12-14 08:11:19 | 0:17:01 | 0:07:28 | 0:09:33 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7115277 | 2022-12-13 23:20:50 | 2022-12-14 07:54:38 | 2022-12-14 08:19:49 | 0:25:11 | 0:19:16 | 0:05:55 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/953dd354-7b86-11ed-8441-001a4aab830c/ceph-mon.c.log:2022-12-14T08:11:46.172+0000 7f84785d5700 7 mon.c@2(peon).log v72 update_from_paxos applying incremental log 72 2022-12-14T08:11:45.582088+0000 mon.a (mon.0) 250 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
dead | 7115278 | 2022-12-13 23:20:52 | 2022-12-14 07:54:49 | 2022-12-14 08:14:28 | 0:19:39 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
pass | 7115279 | 2022-12-13 23:20:53 | 2022-12-14 07:55:09 | 2022-12-14 09:06:14 | 1:11:05 | 1:03:21 | 0:07:44 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/radosbench} | 2 | |
pass | 7115280 | 2022-12-13 23:20:55 | 2022-12-14 07:57:10 | 2022-12-14 09:30:40 | 1:33:30 | 1:26:11 | 0:07:19 | smithi | main | centos | 8.stream | rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115281 | 2022-12-13 23:20:56 | 2022-12-14 07:57:11 | 2022-12-14 08:18:25 | 0:21:14 | 0:09:00 | 0:12:14 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115282 | 2022-12-13 23:20:57 | 2022-12-14 07:59:01 | 2022-12-14 08:18:21 | 0:19:20 | 0:10:14 | 0:09:06 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
fail | 7115283 | 2022-12-13 23:20:59 | 2022-12-14 07:59:02 | 2022-12-14 08:13:07 | 0:14:05 | 0:06:58 | 0:07:07 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason:
Command failed on smithi119 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s' |
pass | 7115284 | 2022-12-13 23:21:00 | 2022-12-14 08:00:03 | 2022-12-14 08:24:14 | 0:24:11 | 0:12:57 | 0:11:14 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 2 | |
pass | 7115285 | 2022-12-13 23:21:01 | 2022-12-14 08:00:13 | 2022-12-14 08:25:59 | 0:25:46 | 0:16:39 | 0:09:07 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} | 2 | |
pass | 7115286 | 2022-12-13 23:21:03 | 2022-12-14 08:00:34 | 2022-12-14 08:25:34 | 0:25:00 | 0:16:50 | 0:08:10 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/redirect} | 2 | |
pass | 7115287 | 2022-12-13 23:21:04 | 2022-12-14 08:02:04 | 2022-12-14 08:24:30 | 0:22:26 | 0:14:42 | 0:07:44 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7115288 | 2022-12-13 23:21:05 | 2022-12-14 08:03:15 | 2022-12-14 08:25:12 | 0:21:57 | 0:14:13 | 0:07:44 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 7115289 | 2022-12-13 23:21:07 | 2022-12-14 08:04:06 | 2022-12-14 08:59:07 | 0:55:01 | 0:48:46 | 0:06:15 | smithi | main | centos | 8.stream | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115290 | 2022-12-13 23:21:08 | 2022-12-14 08:04:06 | 2022-12-14 09:08:46 | 1:04:40 | 0:57:20 | 0:07:20 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_osdmap_prune} | 2 | |
fail | 7115291 | 2022-12-13 23:21:09 | 2022-12-14 08:05:37 | 2022-12-14 08:33:59 | 0:28:22 | 0:19:37 | 0:08:45 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) |
fail | 7115292 | 2022-12-13 23:21:11 | 2022-12-14 08:06:17 | 2022-12-14 08:27:09 | 0:20:52 | 0:14:11 | 0:06:41 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed (workunit test post-file.sh) on smithi085 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh' |
pass | 7115293 | 2022-12-13 23:21:12 | 2022-12-14 08:06:27 | 2022-12-14 08:38:17 | 0:31:50 | 0:23:46 | 0:08:04 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 7115294 | 2022-12-13 23:21:13 | 2022-12-14 08:07:18 | 2022-12-14 08:33:26 | 0:26:08 | 0:18:49 | 0:07:19 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/redirect_promote_tests} | 2 | |
pass | 7115295 | 2022-12-13 23:21:15 | 2022-12-14 08:07:38 | 2022-12-14 08:31:50 | 0:24:12 | 0:11:34 | 0:12:38 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7115296 | 2022-12-13 23:21:16 | 2022-12-14 08:09:29 | 2022-12-14 08:30:39 | 0:21:10 | 0:13:56 | 0:07:14 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7115297 | 2022-12-13 23:21:18 | 2022-12-14 08:09:39 | 2022-12-14 08:28:38 | 0:18:59 | 0:13:25 | 0:05:34 | smithi | main | rhel | 8.6 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115298 | 2022-12-13 23:21:19 | 2022-12-14 08:09:50 | 2022-12-14 08:44:47 | 0:34:57 | 0:28:55 | 0:06:02 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115299 | 2022-12-13 23:21:21 | 2022-12-14 08:10:20 | 2022-12-14 08:29:30 | 0:19:10 | 0:13:27 | 0:05:43 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115300 | 2022-12-13 23:21:22 | 2022-12-14 08:10:41 | 2022-12-14 08:29:47 | 0:19:06 | 0:09:08 | 0:09:58 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{ubuntu_latest} tasks/workunits} | 2 | |
pass | 7115301 | 2022-12-13 23:21:23 | 2022-12-14 08:10:41 | 2022-12-14 08:42:04 | 0:31:23 | 0:25:17 | 0:06:06 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-small-objects} | 2 | |
pass | 7115302 | 2022-12-13 23:21:25 | 2022-12-14 08:11:22 | 2022-12-14 08:34:38 | 0:23:16 | 0:16:46 | 0:06:30 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/redirect_set_object} | 2 | |
fail | 7115303 | 2022-12-13 23:21:26 | 2022-12-14 08:11:32 | 2022-12-14 08:37:33 | 0:26:01 | 0:17:44 | 0:08:17 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/0d031866-7b89-11ed-8441-001a4aab830c/ceph-mon.c.log:2022-12-14T08:28:53.619+0000 7f647ca72700 7 mon.c@2(synchronizing).log v59 update_from_paxos applying incremental log 58 2022-12-14T08:28:51.648078+0000 mon.a (mon.0) 175 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7115304 | 2022-12-13 23:21:27 | 2022-12-14 08:12:33 | 2022-12-14 08:32:45 | 0:20:12 | 0:09:57 | 0:10:15 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | |
pass | 7115305 | 2022-12-13 23:21:29 | 2022-12-14 08:34:07 | 889 | smithi | main | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | ||||
pass | 7115306 | 2022-12-13 23:21:30 | 2022-12-14 08:13:03 | 2022-12-14 08:32:09 | 0:19:06 | 0:12:51 | 0:06:15 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115307 | 2022-12-13 23:21:31 | 2022-12-14 08:13:14 | 2022-12-14 12:48:10 | 4:34:56 | 4:27:42 | 0:07:14 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/osd-backfill} | 1 | |
pass | 7115308 | 2022-12-13 23:21:33 | 2022-12-14 08:13:14 | 2022-12-14 08:40:51 | 0:27:37 | 0:20:55 | 0:06:42 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
pass | 7115309 | 2022-12-13 23:21:34 | 2022-12-14 08:13:45 | 2022-12-14 08:45:05 | 0:31:20 | 0:22:27 | 0:08:53 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7115310 | 2022-12-13 23:21:36 | 2022-12-14 08:13:45 | 2022-12-14 08:43:15 | 0:29:30 | 0:19:25 | 0:10:05 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason:
"/var/log/ceph/688794aa-7b89-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T08:33:40.558+0000 7ff817877700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
pass | 7115311 | 2022-12-13 23:21:37 | 2022-12-14 08:13:45 | 2022-12-14 08:35:55 | 0:22:10 | 0:14:40 | 0:07:30 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/scrub_test} | 2 | |
dead | 7115312 | 2022-12-13 23:21:39 | 2022-12-14 08:14:46 | 2022-12-14 08:34:12 | 0:19:26 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/set-chunks-read} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
pass | 7115313 | 2022-12-13 23:21:40 | 2022-12-14 08:14:46 | 2022-12-14 08:43:33 | 0:28:47 | 0:19:35 | 0:09:12 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/test_rbd_api} | 3 | |
pass | 7115314 | 2022-12-13 23:21:42 | 2022-12-14 08:16:17 | 2022-12-14 08:39:27 | 0:23:10 | 0:14:56 | 0:08:14 | smithi | main | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115315 | 2022-12-13 23:21:43 | 2022-12-14 08:18:28 | 2022-12-14 08:36:36 | 0:18:08 | 0:07:54 | 0:10:14 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 3 | |
pass | 7115316 | 2022-12-13 23:21:44 | 2022-12-14 08:19:18 | 2022-12-14 08:36:37 | 0:17:19 | 0:08:42 | 0:08:37 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7115317 | 2022-12-13 23:21:46 | 2022-12-14 08:19:19 | 2022-12-14 08:55:16 | 0:35:57 | 0:24:47 | 0:11:10 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/00080f8a-7b8a-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T08:42:39.141+0000 7fdef47a4700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
pass | 7115318 | 2022-12-13 23:21:47 | 2022-12-14 08:19:59 | 2022-12-14 08:46:09 | 0:26:10 | 0:15:10 | 0:11:00 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7115319 | 2022-12-13 23:21:49 | 2022-12-14 08:24:20 | 2022-12-14 09:05:49 | 0:41:29 | 0:32:09 | 0:09:20 | smithi | main | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115320 | 2022-12-13 23:21:50 | 2022-12-14 08:24:31 | 2022-12-14 08:55:37 | 0:31:06 | 0:25:14 | 0:05:52 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/small-objects-balanced} | 2 | |
pass | 7115321 | 2022-12-13 23:21:51 | 2022-12-14 08:24:31 | 2022-12-14 09:23:46 | 0:59:15 | 0:52:00 | 0:07:15 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 7115322 | 2022-12-13 23:21:53 | 2022-12-14 08:25:01 | 2022-12-14 09:04:22 | 0:39:21 | 0:33:06 | 0:06:15 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/one workloads/rados_mon_workunits} | 2 | |
pass | 7115323 | 2022-12-13 23:21:54 | 2022-12-14 08:25:22 | 2022-12-14 08:46:21 | 0:20:59 | 0:14:25 | 0:06:34 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7115324 | 2022-12-13 23:21:56 | 2022-12-14 08:25:22 | 2022-12-14 08:49:18 | 0:23:56 | 0:16:51 | 0:07:05 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
"/var/log/ceph/fb567c28-7b8a-11ed-8441-001a4aab830c/ceph-mon.smithi035.log:2022-12-14T08:46:33.418+0000 7fc0c23d2700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7115325 | 2022-12-13 23:21:57 | 2022-12-14 08:25:43 | 2022-12-14 08:46:25 | 0:20:42 | 0:09:42 | 0:11:00 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115326 | 2022-12-13 23:21:59 | 2022-12-14 08:26:03 | 2022-12-14 08:44:59 | 0:18:56 | 0:09:57 | 0:08:59 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
fail | 7115327 | 2022-12-13 23:22:00 | 2022-12-14 08:26:03 | 2022-12-14 11:00:25 | 2:34:22 | 2:12:27 | 0:21:55 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed on smithi080 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:-*SyntheticMatrixC* --gtest_catch_exceptions=0\'' |
pass | 7115328 | 2022-12-13 23:22:02 | 2022-12-14 08:27:14 | 2022-12-14 09:10:45 | 0:43:31 | 0:37:05 | 0:06:26 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7115329 | 2022-12-13 23:22:03 | 2022-12-14 08:27:34 | 2022-12-14 08:58:50 | 0:31:16 | 0:24:10 | 0:07:06 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/small-objects-localized} | 2 | |
pass | 7115330 | 2022-12-13 23:22:04 | 2022-12-14 08:27:35 | 2022-12-14 08:46:43 | 0:19:08 | 0:10:16 | 0:08:52 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7115331 | 2022-12-13 23:22:06 | 2022-12-14 08:27:35 | 2022-12-14 09:03:02 | 0:35:27 | 0:26:25 | 0:09:02 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason:
"/var/log/ceph/dd31fba4-7b8b-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T08:49:44.561+0000 7f5a78935700 0 log_channel(cluster) log [WRN] : Health check failed: 2/5 mons down, quorum a,e,c (MON_DOWN)" in cluster log |
pass | 7115332 | 2022-12-13 23:22:07 | 2022-12-14 08:29:56 | 2022-12-14 09:23:46 | 0:53:50 | 0:46:47 | 0:07:03 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
fail | 7115333 | 2022-12-13 23:22:09 | 2022-12-14 08:30:46 | 2022-12-14 09:01:47 | 0:31:01 | 0:24:09 | 0:06:52 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2 | |
Failure Reason:
"/var/log/ceph/dc7db05e-7b8b-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T08:58:23.849+0000 7fc469c10700 0 log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log |
pass | 7115334 | 2022-12-13 23:22:10 | 2022-12-14 08:30:46 | 2022-12-14 08:51:06 | 0:20:20 | 0:10:34 | 0:09:46 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{ubuntu_latest} tasks/crash} | 2 | |
fail | 7115335 | 2022-12-13 23:22:11 | 2022-12-14 08:31:57 | 2022-12-14 08:52:34 | 0:20:37 | 0:08:13 | 0:12:24 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason:
Command failed on smithi008 with status 1: 'sudo kubeadm init --node-name smithi008 --token abcdef.lbtb84t5yvzdbkg0 --pod-network-cidr 10.248.56.0/21' |
pass | 7115336 | 2022-12-13 23:22:13 | 2022-12-14 08:31:58 | 2022-12-14 08:50:39 | 0:18:41 | 0:12:14 | 0:06:27 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115337 | 2022-12-13 23:22:15 | 2022-12-14 08:31:58 | 2022-12-14 08:55:12 | 0:23:14 | 0:15:55 | 0:07:19 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/libcephsqlite} | 2 | |
pass | 7115338 | 2022-12-13 23:22:16 | 2022-12-14 08:32:18 | 2022-12-14 09:05:39 | 0:33:21 | 0:22:35 | 0:10:46 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/small-objects} | 2 | |
pass | 7115339 | 2022-12-13 23:22:18 | 2022-12-14 08:33:29 | 2022-12-14 09:10:44 | 0:37:15 | 0:29:43 | 0:07:32 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
fail | 7115340 | 2022-12-13 23:22:19 | 2022-12-14 08:34:09 | 2022-12-14 08:57:47 | 0:23:38 | 0:16:30 | 0:07:08 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/f7de7928-7b8b-11ed-8441-001a4aab830c/ceph-mon.c.log:2022-12-14T08:50:32.075+0000 7f5b7e537700 7 mon.c@2(peon).log v96 update_from_paxos applying incremental log 96 2022-12-14T08:50:31.787244+0000 mon.a (mon.0) 350 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
pass | 7115341 | 2022-12-13 23:22:20 | 2022-12-14 08:34:30 | 2022-12-14 08:57:37 | 0:23:07 | 0:17:19 | 0:05:48 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115342 | 2022-12-13 23:22:22 | 2022-12-14 08:34:30 | 2022-12-14 09:09:48 | 0:35:18 | 0:27:41 | 0:07:37 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7115343 | 2022-12-13 23:22:23 | 2022-12-14 08:34:41 | 2022-12-14 08:57:00 | 0:22:19 | 0:13:39 | 0:08:40 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115344 | 2022-12-13 23:22:25 | 2022-12-14 08:36:01 | 2022-12-14 12:04:47 | 3:28:46 | 3:19:33 | 0:09:13 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1 | |
pass | 7115345 | 2022-12-13 23:22:26 | 2022-12-14 08:36:02 | 2022-12-14 09:10:12 | 0:34:10 | 0:24:46 | 0:09:24 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
pass | 7115346 | 2022-12-13 23:22:27 | 2022-12-14 08:36:42 | 2022-12-14 08:57:42 | 0:21:00 | 0:14:13 | 0:06:47 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7115347 | 2022-12-13 23:22:29 | 2022-12-14 08:36:43 | 2022-12-14 08:55:46 | 0:19:03 | 0:12:40 | 0:06:23 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-config mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7115348 | 2022-12-13 23:22:30 | 2022-12-14 08:36:43 | 2022-12-14 08:56:50 | 0:20:07 | 0:12:56 | 0:07:11 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 3 | |
pass | 7115349 | 2022-12-13 23:22:32 | 2022-12-14 08:37:14 | 2022-12-14 08:56:29 | 0:19:15 | 0:10:12 | 0:09:03 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
pass | 7115350 | 2022-12-13 23:22:33 | 2022-12-14 08:37:14 | 2022-12-14 08:59:51 | 0:22:37 | 0:11:46 | 0:10:51 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7115351 | 2022-12-13 23:22:34 | 2022-12-14 08:38:25 | 2022-12-14 09:05:10 | 0:26:45 | 0:19:17 | 0:07:28 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} | 2 | |
fail | 7115352 | 2022-12-13 23:22:36 | 2022-12-14 08:39:35 | 2022-12-14 09:05:39 | 0:26:04 | 0:15:47 | 0:10:17 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
"/var/log/ceph/3fd2db7e-7b8d-11ed-8441-001a4aab830c/ceph-mon.smithi163.log:2022-12-14T09:00:32.204+0000 7f7f5e8fe700 0 log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log |
fail | 7115353 | 2022-12-13 23:22:37 | 2022-12-14 08:41:56 | 2022-12-14 09:03:02 | 0:21:06 | 0:11:42 | 0:09:24 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_librados_build.sh) on smithi033 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh' |
pass | 7115354 | 2022-12-13 23:22:39 | 2022-12-14 08:42:07 | 2022-12-14 09:20:18 | 0:38:11 | 0:30:21 | 0:07:50 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
pass | 7115355 | 2022-12-13 23:22:40 | 2022-12-14 08:43:17 | 2022-12-14 09:21:14 | 0:37:57 | 0:28:35 | 0:09:22 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/snaps-few-objects} | 2 | |
pass | 7115356 | 2022-12-13 23:22:41 | 2022-12-14 08:43:38 | 2022-12-14 09:20:29 | 0:36:51 | 0:31:07 | 0:05:44 | smithi | main | rhel | 8.6 | rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7115357 | 2022-12-13 23:22:43 | 2022-12-14 08:43:38 | 2022-12-14 11:41:57 | 2:58:19 | 2:30:33 | 0:27:46 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7115358 | 2022-12-13 23:22:44 | 2022-12-14 08:44:48 | 2022-12-14 09:26:15 | 0:41:27 | 0:34:55 | 0:06:32 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
"/var/log/ceph/c6ca4176-7b8d-11ed-8441-001a4aab830c/ceph-mon.a.log:2022-12-14T09:09:59.999+0000 7ff122c05700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log |
pass | 7115359 | 2022-12-13 23:22:45 | 2022-12-14 08:45:09 | 2022-12-14 09:06:07 | 0:20:58 | 0:12:14 | 0:08:44 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_adoption} | 1 | |
pass | 7115360 | 2022-12-13 23:22:47 | 2022-12-14 08:46:20 | 2022-12-14 09:07:11 | 0:20:51 | 0:15:38 | 0:05:13 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
fail | 7115361 | 2022-12-13 23:22:48 | 2022-12-14 08:46:20 | 2022-12-14 09:19:33 | 0:33:13 | 0:27:17 | 0:05:56 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=570ede77572a3e5feb912523281e50b9e1e2539f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 7115362 | 2022-12-13 23:22:50 | 2022-12-14 08:46:30 | 2022-12-14 09:21:36 | 0:35:06 | 0:27:22 | 0:07:44 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/snaps-few-objects} | 2 | |
pass | 7115363 | 2022-12-13 23:22:51 | 2022-12-14 08:48:21 | 2022-12-14 09:19:05 | 0:30:44 | 0:18:29 | 0:12:15 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115364 | 2022-12-13 23:22:52 | 2022-12-14 08:49:22 | 2022-12-14 09:51:39 | 1:02:17 | 0:56:21 | 0:05:56 | smithi | main | centos | 8.stream | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7115365 | 2022-12-13 23:22:54 | 2022-12-14 08:49:22 | 2022-12-14 09:14:18 | 0:24:56 | 0:16:26 | 0:08:30 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/4e6e926c-7b8e-11ed-8441-001a4aab830c/ceph-mon.c.log:2022-12-14T09:06:27.688+0000 7f35cd17d700 7 mon.c@2(synchronizing).log v61 update_from_paxos applying incremental log 60 2022-12-14T09:06:25.706854+0000 mon.a (mon.0) 175 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7115366 | 2022-12-13 23:22:56 | 2022-12-14 08:51:13 | 2022-12-14 09:29:13 | 0:38:00 | 0:25:10 | 0:12:50 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
pass | 7115367 | 2022-12-13 23:22:57 | 2022-12-14 08:52:43 | 2022-12-14 09:17:26 | 0:24:43 | 0:19:06 | 0:05:37 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
pass | 7115368 | 2022-12-13 23:22:58 | 2022-12-14 08:52:44 | 2022-12-14 09:15:51 | 0:23:07 | 0:15:22 | 0:07:45 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-bitmap} supported-random-distro$/{centos_8} tasks/failover} | 2 | |
pass | 7115369 | 2022-12-13 23:23:00 | 2022-12-14 08:54:14 | 2022-12-14 09:34:26 | 0:40:12 | 0:32:39 | 0:07:33 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7115370 | 2022-12-13 23:23:01 | 2022-12-14 08:55:15 | 2022-12-14 09:28:40 | 0:33:25 | 0:24:02 | 0:09:23 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115371 | 2022-12-13 23:23:03 | 2022-12-14 08:55:25 | 2022-12-14 09:14:14 | 0:18:49 | 0:09:54 | 0:08:55 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
pass | 7115372 | 2022-12-13 23:23:04 | 2022-12-14 08:55:26 | 2022-12-14 09:31:02 | 0:35:36 | 0:29:34 | 0:06:02 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7115373 | 2022-12-13 23:23:06 | 2022-12-14 08:55:46 | 2022-12-14 09:22:47 | 0:27:01 | 0:20:06 | 0:06:55 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_cephadm} | 1 | |
pass | 7115374 | 2022-12-13 23:23:07 | 2022-12-14 08:55:56 | 2022-12-14 09:26:46 | 0:30:50 | 0:22:28 | 0:08:22 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 7115375 | 2022-12-13 23:23:08 | 2022-12-14 08:56:57 | 2022-12-14 09:18:36 | 0:21:39 | 0:15:43 | 0:05:56 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/mon_recovery} | 2 | |
pass | 7115376 | 2022-12-13 23:23:10 | 2022-12-14 08:56:57 | 2022-12-14 09:14:37 | 0:17:40 | 0:08:31 | 0:09:09 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/peer mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7115377 | 2022-12-13 23:23:11 | 2022-12-14 08:57:08 | 2022-12-14 09:25:17 | 0:28:09 | 0:18:16 | 0:09:53 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 |