User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
ozeneva | 2021-09-14 18:00:14 | 2021-09-14 18:02:10 | 2021-09-15 13:46:37 | 19:44:27 | rados | wip-omri-tracer | smithi | e6b3c32 | 100 | 73 | 46 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6389850 | 2021-09-14 18:01:25 | 2021-09-14 18:02:10 | 2021-09-14 18:32:40 | 0:30:30 | 0:18:00 | 0:12:30 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
dead | 6389851 | 2021-09-14 18:01:26 | 2021-09-14 18:02:10 | 2021-09-15 06:10:45 | 12:08:35 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6389852 | 2021-09-14 18:01:27 | 2021-09-14 18:02:10 | 2021-09-14 18:22:31 | 0:20:21 | 0:13:10 | 0:07:11 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_orch_cli} | 1 | |
fail | 6389853 | 2021-09-14 18:01:28 | 2021-09-14 18:02:11 | 2021-09-14 18:32:23 | 0:30:12 | 0:16:58 | 0:13:14 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason:
"2021-09-14T18:30:42.213059+0000 mon.a (mon.0) 499 : cluster [WRN] Health check failed: 1 daemons have recently crashed (RECENT_CRASH)" in cluster log |
fail | 6389854 | 2021-09-14 18:01:29 | 2021-09-14 18:02:11 | 2021-09-14 21:26:21 | 3:24:10 | 3:11:12 | 0:12:58 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi078 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 6389855 | 2021-09-14 18:01:30 | 2021-09-14 18:02:11 | 2021-09-14 18:22:01 | 0:19:50 | 0:09:11 | 0:10:39 | smithi | master | centos | 8.3 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
dead | 6389856 | 2021-09-14 18:01:31 | 2021-09-14 18:02:11 | 2021-09-15 06:13:52 | 12:11:41 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||
Failure Reason:
hit max job timeout |
dead | 6389857 | 2021-09-14 18:01:31 | 2021-09-14 18:02:11 | 2021-09-15 06:14:15 | 12:12:04 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6389858 | 2021-09-14 18:01:32 | 2021-09-14 18:02:12 | 2021-09-14 18:30:23 | 0:28:11 | 0:21:09 | 0:07:02 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-services/nfs-ingress 3-final} | 2 | |
fail | 6389859 | 2021-09-14 18:01:33 | 2021-09-14 18:02:12 | 2021-09-14 18:49:21 | 0:47:09 | 0:35:26 | 0:11:43 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
Failure Reason:
"2021-09-14T18:40:56.380719+0000 mon.a (mon.0) 766 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
dead | 6389860 | 2021-09-14 18:01:34 | 2021-09-14 18:02:12 | 2021-09-15 06:14:04 | 12:11:52 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-snaps} | 2 | |||
Failure Reason:
hit max job timeout |
fail | 6389861 | 2021-09-14 18:01:35 | 2021-09-14 18:02:12 | 2021-09-14 21:21:41 | 3:19:29 | 3:08:34 | 0:10:55 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/deduptool mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_dedup_tool.sh) on smithi098 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_dedup_tool.sh' |
pass | 6389862 | 2021-09-14 18:01:36 | 2021-09-14 18:02:13 | 2021-09-14 18:40:53 | 0:38:40 | 0:22:53 | 0:15:47 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 | |
dead | 6389863 | 2021-09-14 18:01:37 | 2021-09-14 18:02:13 | 2021-09-15 06:13:47 | 12:11:34 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |||
Failure Reason:
hit max job timeout |
fail | 6389864 | 2021-09-14 18:01:38 | 2021-09-14 18:02:13 | 2021-09-14 18:32:30 | 0:30:17 | 0:20:27 | 0:09:50 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.0} | 1 | |
Failure Reason:
'wait for operator' reached maximum tries (90) after waiting for 900 seconds |
fail | 6389865 | 2021-09-14 18:01:39 | 2021-09-14 18:02:15 | 2021-09-14 18:59:23 | 0:57:08 | 0:43:29 | 0:13:39 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/dashboard} | 2 | |
Failure Reason:
Test failure: test_create_access_permissions (tasks.mgr.dashboard.test_pool.PoolTest) |
fail | 6389866 | 2021-09-14 18:01:40 | 2021-09-14 18:02:15 | 2021-09-14 18:37:37 | 0:35:22 | 0:26:22 | 0:09:00 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/alloc-hint supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
"2021-09-14T18:28:54.747575+0000 mon.a (mon.0) 195 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
fail | 6389867 | 2021-09-14 18:01:40 | 2021-09-14 18:02:15 | 2021-09-14 18:36:31 | 0:34:16 | 0:26:16 | 0:08:00 | smithi | master | rhel | 8.4 | rados/rest/{mgr-restful supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed on smithi143 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg deep-scrub 4.34' |
fail | 6389868 | 2021-09-14 18:01:41 | 2021-09-14 18:02:15 | 2021-09-14 18:34:44 | 0:32:29 | 0:26:47 | 0:05:42 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
"2021-09-14T18:26:17.416705+0000 mon.a (mon.0) 203 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 6389869 | 2021-09-14 18:01:42 | 2021-09-14 18:02:15 | 2021-09-14 18:21:26 | 0:19:11 | 0:11:05 | 0:08:06 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/c2c} | 1 | |
fail | 6389870 | 2021-09-14 18:01:43 | 2021-09-14 18:02:16 | 2021-09-14 18:52:51 | 0:50:35 | 0:36:29 | 0:14:06 | smithi | master | ubuntu | 20.04 | rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_cmpomap.sh) on smithi008 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_cmpomap.sh' |
pass | 6389871 | 2021-09-14 18:01:44 | 2021-09-14 18:02:17 | 2021-09-14 18:38:54 | 0:36:37 | 0:28:13 | 0:08:24 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
fail | 6389872 | 2021-09-14 18:01:45 | 2021-09-14 18:02:17 | 2021-09-14 19:55:29 | 1:53:12 | 1:41:39 | 0:11:33 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
wait_for_clean: failed before timeout expired |
fail | 6389873 | 2021-09-14 18:01:46 | 2021-09-14 18:02:18 | 2021-09-14 21:40:07 | 3:37:49 | 3:30:17 | 0:07:32 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi031 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
fail | 6389874 | 2021-09-14 18:01:47 | 2021-09-14 18:02:18 | 2021-09-14 18:42:55 | 0:40:37 | 0:28:44 | 0:11:53 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Found coredumps on ubuntu@smithi049.front.sepia.ceph.com |
fail | 6389875 | 2021-09-14 18:01:48 | 2021-09-14 18:02:18 | 2021-09-14 18:30:58 | 0:28:40 | 0:15:22 | 0:13:18 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/divergent_priors mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed on smithi151 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
pass | 6389876 | 2021-09-14 18:01:48 | 2021-09-14 18:02:18 | 2021-09-14 18:20:13 | 0:17:55 | 0:07:58 | 0:09:57 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 6389877 | 2021-09-14 18:01:49 | 2021-09-14 18:02:19 | 2021-09-15 06:14:45 | 12:12:26 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6389878 | 2021-09-14 18:01:50 | 2021-09-14 18:02:19 | 2021-09-14 18:31:22 | 0:29:03 | 0:16:15 | 0:12:48 | smithi | master | centos | 8.3 | rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.3_container_tools_3.0} 2-node-mgr orchestrator_cli} | 2 | |
pass | 6389879 | 2021-09-14 18:01:51 | 2021-09-14 18:02:19 | 2021-09-14 18:23:43 | 0:21:24 | 0:09:36 | 0:11:48 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
pass | 6389880 | 2021-09-14 18:01:52 | 2021-09-14 18:02:20 | 2021-09-14 18:28:38 | 0:26:18 | 0:14:42 | 0:11:36 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
pass | 6389881 | 2021-09-14 18:01:53 | 2021-09-14 18:02:20 | 2021-09-14 18:36:40 | 0:34:20 | 0:23:23 | 0:10:57 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8.stream} tasks/progress} | 2 | |
pass | 6389882 | 2021-09-14 18:01:54 | 2021-09-14 18:02:21 | 2021-09-14 18:22:53 | 0:20:32 | 0:11:21 | 0:09:11 | smithi | master | centos | 8.2 | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.2_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
fail | 6389883 | 2021-09-14 18:01:55 | 2021-09-14 18:02:21 | 2021-09-14 18:36:59 | 0:34:38 | 0:27:47 | 0:06:51 | smithi | master | rhel | 8.4 | rados/singleton/{all/divergent_priors2 mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed on smithi045 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 6389884 | 2021-09-14 18:01:55 | 2021-09-14 18:02:21 | 2021-09-14 18:38:07 | 0:35:46 | 0:25:14 | 0:10:32 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed on smithi153 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'dd if=/dev/urandom of=$TESTDIR/mnt.0/foo bs=1M count=5'" |
pass | 6389885 | 2021-09-14 18:01:56 | 2021-09-14 18:02:22 | 2021-09-14 18:45:35 | 0:43:13 | 0:30:06 | 0:13:07 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} | 2 | |
dead | 6389886 | 2021-09-14 18:01:57 | 2021-09-14 18:02:23 | 2021-09-15 06:13:45 | 12:11:22 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/dedup-io-mixed} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6389887 | 2021-09-14 18:01:58 | 2021-09-14 18:02:23 | 2021-09-14 18:20:50 | 0:18:27 | 0:09:50 | 0:08:37 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_adoption} | 1 | |
fail | 6389888 | 2021-09-14 18:01:59 | 2021-09-14 18:02:23 | 2021-09-14 21:34:32 | 3:32:09 | 3:17:21 | 0:14:48 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rgw.sh) on smithi099 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh' |
dead | 6389889 | 2021-09-14 18:02:00 | 2021-09-14 18:02:23 | 2021-09-15 06:13:58 | 12:11:35 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason:
hit max job timeout |
pass | 6389890 | 2021-09-14 18:02:01 | 2021-09-14 18:02:25 | 2021-09-14 18:31:38 | 0:29:13 | 0:23:05 | 0:06:08 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-services/nfs-ingress2 3-final} | 2 | |
pass | 6389891 | 2021-09-14 18:02:02 | 2021-09-14 18:02:25 | 2021-09-14 18:22:29 | 0:20:04 | 0:10:35 | 0:09:29 | smithi | master | centos | 8.3 | rados/singleton/{all/dump-stuck mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6389892 | 2021-09-14 18:02:03 | 2021-09-14 18:02:25 | 2021-09-14 18:29:12 | 0:26:47 | 0:20:22 | 0:06:25 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6389893 | 2021-09-14 18:02:04 | 2021-09-14 18:02:25 | 2021-09-14 18:29:06 | 0:26:41 | 0:15:30 | 0:11:11 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
fail | 6389894 | 2021-09-14 18:02:05 | 2021-09-14 18:02:25 | 2021-09-15 00:46:25 | 6:44:00 | 6:30:41 | 0:13:19 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi124 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
fail | 6389895 | 2021-09-14 18:02:05 | 2021-09-14 18:02:26 | 2021-09-14 18:41:25 | 0:38:59 | 0:30:15 | 0:08:44 | smithi | master | rhel | 8.4 | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed on smithi100 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
dead | 6389896 | 2021-09-14 18:02:06 | 2021-09-14 18:02:26 | 2021-09-15 06:14:24 | 12:11:58 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/dedup-io-snaps} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6389897 | 2021-09-14 18:02:07 | 2021-09-14 18:02:27 | 2021-09-14 18:34:20 | 0:31:53 | 0:17:57 | 0:13:56 | smithi | master | centos | 8.stream | rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} tasks/mon_recovery} | 3 | |
pass | 6389898 | 2021-09-14 18:02:08 | 2021-09-14 18:02:27 | 2021-09-14 18:28:00 | 0:25:33 | 0:15:43 | 0:09:50 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm} | 1 | |
dead | 6389899 | 2021-09-14 18:02:09 | 2021-09-14 18:02:28 | 2021-09-15 06:13:56 | 12:11:28 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |||
Failure Reason:
hit max job timeout |
dead | 6389900 | 2021-09-14 18:02:11 | 2021-09-14 18:02:28 | 2021-09-15 06:13:35 | 12:11:07 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason:
hit max job timeout |
dead | 6389901 | 2021-09-14 18:02:20 | 2021-09-14 18:02:28 | 2021-09-15 06:13:54 | 12:11:26 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/snaps-few-objects} | 2 | |||
Failure Reason:
hit max job timeout |
fail | 6389902 | 2021-09-14 18:02:21 | 2021-09-14 18:02:28 | 2021-09-14 19:06:28 | 1:04:00 | 0:48:45 | 0:15:15 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-radosbench} | 2 | |
Failure Reason:
Command failed on smithi170 with status 11: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg deep-scrub 1.43' |
pass | 6389903 | 2021-09-14 18:02:22 | 2021-09-14 18:02:30 | 2021-09-14 18:31:05 | 0:28:35 | 0:17:25 | 0:11:10 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs 3-final} | 2 | |
fail | 6389904 | 2021-09-14 18:02:24 | 2021-09-14 18:02:30 | 2021-09-14 18:38:36 | 0:36:06 | 0:22:19 | 0:13:47 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed on smithi183 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg 1.e mark_unfound_lost delete' |
pass | 6389905 | 2021-09-14 18:02:25 | 2021-09-14 18:02:30 | 2021-09-14 18:23:45 | 0:21:15 | 0:10:04 | 0:11:11 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
pass | 6389906 | 2021-09-14 18:02:26 | 2021-09-14 18:02:30 | 2021-09-14 18:24:30 | 0:22:00 | 0:09:17 | 0:12:43 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6389907 | 2021-09-14 18:02:27 | 2021-09-14 18:02:31 | 2021-09-14 18:31:10 | 0:28:39 | 0:20:43 | 0:07:56 | smithi | master | centos | 8.3 | rados/standalone/{supported-random-distro$/{centos_8} workloads/crush} | 1 | |
fail | 6389908 | 2021-09-14 18:02:28 | 2021-09-14 18:02:31 | 2021-09-14 19:20:31 | 1:18:00 | 1:06:08 | 0:11:52 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi149 with status 6: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 shell --fsid d47c8fd4-1588-11ec-8c25-001a4aab830c -- ceph tell osd.5 flush_pg_stats' |
pass | 6389909 | 2021-09-14 18:02:29 | 2021-09-14 18:03:51 | 2021-09-14 18:46:31 | 0:42:40 | 0:34:30 | 0:08:10 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
dead | 6389910 | 2021-09-14 18:02:31 | 2021-09-14 18:03:51 | 2021-09-15 06:15:22 | 12:11:31 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |||
Failure Reason:
hit max job timeout |
dead | 6389911 | 2021-09-14 18:02:32 | 2021-09-14 18:04:12 | 2021-09-15 06:14:32 | 12:10:20 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/pool-snaps-few-objects} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6389912 | 2021-09-14 18:02:33 | 2021-09-14 18:05:02 | 2021-09-14 18:32:47 | 0:27:45 | 0:14:18 | 0:13:27 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 1-start 2-services/nfs2 3-final} | 2 | |
pass | 6389913 | 2021-09-14 18:02:34 | 2021-09-14 18:07:52 | 2021-09-14 18:26:29 | 0:18:37 | 0:09:07 | 0:09:30 | smithi | master | centos | 8.3 | rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6389914 | 2021-09-14 18:02:35 | 2021-09-14 18:07:53 | 2021-09-14 18:25:30 | 0:17:37 | 0:07:47 | 0:09:50 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6389915 | 2021-09-14 18:02:36 | 2021-09-14 18:07:53 | 2021-09-14 18:32:48 | 0:24:55 | 0:18:51 | 0:06:04 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
dead | 6389916 | 2021-09-14 18:02:37 | 2021-09-14 18:08:03 | 2021-09-15 06:16:53 | 12:08:50 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |||
Failure Reason:
hit max job timeout |
pass | 6389917 | 2021-09-14 18:02:38 | 2021-09-14 18:08:03 | 2021-09-14 18:33:40 | 0:25:37 | 0:14:21 | 0:11:16 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-zlib supported-random-distro$/{centos_8.stream} tasks/prometheus} | 2 | |
fail | 6389918 | 2021-09-14 18:02:39 | 2021-09-14 18:08:34 | 2021-09-14 18:34:13 | 0:25:39 | 0:14:35 | 0:11:04 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi041 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
pass | 6389919 | 2021-09-14 18:02:39 | 2021-09-14 18:09:04 | 2021-09-14 18:24:00 | 0:14:56 | 0:06:26 | 0:08:30 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 6389920 | 2021-09-14 18:02:40 | 2021-09-14 18:09:04 | 2021-09-14 18:45:54 | 0:36:50 | 0:28:09 | 0:08:41 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/connectivity} | 2 | |
fail | 6389921 | 2021-09-14 18:02:41 | 2021-09-14 18:09:14 | 2021-09-14 21:36:59 | 3:27:45 | 3:15:30 | 0:12:15 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi158 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
fail | 6389922 | 2021-09-14 18:02:42 | 2021-09-14 18:09:15 | 2021-09-14 18:44:13 | 0:34:58 | 0:24:47 | 0:10:11 | smithi | master | centos | 8.stream | rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6389923 | 2021-09-14 18:02:43 | 2021-09-14 18:09:15 | 2021-09-14 18:28:01 | 0:18:46 | 0:09:03 | 0:09:43 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
Command failed on smithi091 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e6e3064e-1588-11ec-8c25-001a4aab830c -- ceph mon dump -f json' |
dead | 6389924 | 2021-09-14 18:02:44 | 2021-09-14 18:09:15 | 2021-09-15 06:28:04 | 12:18:49 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason:
hit max job timeout |
dead | 6389925 | 2021-09-14 18:02:45 | 2021-09-14 18:18:48 | 2021-09-15 06:28:48 | 12:10:00 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{centos_8.stream}} | 1 | |||
Failure Reason:
hit max job timeout |
pass | 6389926 | 2021-09-14 18:02:46 | 2021-09-14 18:20:19 | 2021-09-14 18:46:22 | 0:26:03 | 0:18:58 | 0:07:05 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6389927 | 2021-09-14 18:02:47 | 2021-09-14 18:21:30 | 2021-09-14 18:45:32 | 0:24:02 | 0:13:51 | 0:10:11 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
fail | 6389928 | 2021-09-14 18:02:48 | 2021-09-14 18:22:11 | 2021-09-14 18:49:08 | 0:26:57 | 0:17:08 | 0:09:49 | smithi | master | centos | 8.stream | rados/objectstore/{backends/filejournal supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason:
"2021-09-14T18:39:43.343458+0000 mon.a (mon.0) 87 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 6389929 | 2021-09-14 18:02:49 | 2021-09-14 18:22:11 | 2021-09-14 18:53:01 | 0:30:50 | 0:21:54 | 0:08:56 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_nfs} | 1 | |
fail | 6389930 | 2021-09-14 18:02:50 | 2021-09-14 18:22:31 | 2021-09-14 19:00:05 | 0:37:34 | 0:25:34 | 0:12:00 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados tasks/rados_cls_all validater/lockdep} | 2 | |
Failure Reason:
Command failed on smithi087 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json' |
fail | 6389931 | 2021-09-14 18:02:50 | 2021-09-14 18:23:02 | 2021-09-14 19:51:54 | 1:28:52 | 1:16:35 | 0:12:17 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
Failure Reason:
reached maximum tries (500) after waiting for 3000 seconds |
fail | 6389932 | 2021-09-14 18:02:51 | 2021-09-14 18:23:52 | 2021-09-14 18:59:15 | 0:35:23 | 0:23:51 | 0:11:32 | smithi | master | centos | 8.stream | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6389933 | 2021-09-14 18:02:52 | 2021-09-14 18:24:02 | 2021-09-14 18:52:50 | 0:28:48 | 0:21:00 | 0:07:48 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-services/rgw 3-final} | 2 | |
pass | 6389934 | 2021-09-14 18:02:53 | 2021-09-14 18:25:33 | 2021-09-14 18:48:57 | 0:23:24 | 0:09:53 | 0:13:31 | smithi | master | centos | 8.stream | rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_no_skews} | 3 | |
dead | 6389935 | 2021-09-14 18:02:54 | 2021-09-14 18:28:03 | 2021-09-15 06:37:53 | 12:09:50 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||
Failure Reason:
hit max job timeout |
dead | 6389936 | 2021-09-14 18:02:55 | 2021-09-14 18:28:43 | 2021-09-15 06:38:15 | 12:09:32 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason:
hit max job timeout |
fail | 6389937 | 2021-09-14 18:02:56 | 2021-09-14 18:29:14 | 2021-09-14 18:54:02 | 0:24:48 | 0:14:53 | 0:09:55 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
"2021-09-14T18:44:51.639365+0000 mon.a (mon.0) 131 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
fail | 6389938 | 2021-09-14 18:02:57 | 2021-09-14 18:29:14 | 2021-09-14 19:06:11 | 0:36:57 | 0:25:12 | 0:11:45 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/one workloads/pool-create-delete} | 2 | |
Failure Reason:
"2021-09-14T18:57:29.950708+0000 mon.a (mon.0) 1026 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
dead | 6389939 | 2021-09-14 18:02:58 | 2021-09-14 18:30:24 | 2021-09-15 06:40:02 | 12:09:38 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-small-objects-balanced} | 2 | |||
Failure Reason:
hit max job timeout |
fail | 6389940 | 2021-09-14 18:02:58 | 2021-09-14 18:31:15 | 2021-09-14 19:42:28 | 1:11:13 | 1:00:36 | 0:10:37 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason:
wait_for_clean: failed before timeout expired |
fail | 6389941 | 2021-09-14 18:02:59 | 2021-09-14 18:31:15 | 2021-09-14 21:52:57 | 3:21:42 | 3:10:38 | 0:11:04 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi157 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
fail | 6389942 | 2021-09-14 18:03:00 | 2021-09-14 18:31:25 | 2021-09-14 22:03:19 | 3:31:54 | 3:21:01 | 0:10:53 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi016 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 6389943 | 2021-09-14 18:03:01 | 2021-09-14 18:31:46 | 2021-09-14 18:56:43 | 0:24:57 | 0:17:55 | 0:07:02 | smithi | master | rhel | 8.4 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6389944 | 2021-09-14 18:03:02 | 2021-09-14 18:31:46 | 2021-09-14 18:49:26 | 0:17:40 | 0:07:54 | 0:09:46 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi150 with status 126: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fca99918-158b-11ec-8c25-001a4aab830c -- ceph mon dump -f json' |
fail | 6389945 | 2021-09-14 18:03:03 | 2021-09-14 18:32:26 | 2021-09-14 19:05:50 | 0:33:24 | 0:26:02 | 0:07:22 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_stress_watch} | 2 | |
Failure Reason:
"2021-09-14T18:55:51.959385+0000 mon.a (mon.0) 236 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
fail | 6389946 | 2021-09-14 18:03:04 | 2021-09-14 18:32:46 | 2021-09-14 19:55:54 | 1:23:08 | 1:10:27 | 0:12:41 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/radosbench} | 2 | |
Failure Reason:
reached maximum tries (500) after waiting for 3000 seconds |
pass | 6389947 | 2021-09-14 18:03:04 | 2021-09-14 18:32:57 | 2021-09-14 18:51:51 | 0:18:54 | 0:10:06 | 0:08:48 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6389948 | 2021-09-14 18:03:05 | 2021-09-14 18:32:57 | 2021-09-14 19:19:52 | 0:46:55 | 0:38:13 | 0:08:42 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/erasure-code} | 1 | |
Failure Reason:
Command failed (workunit test erasure-code/test-erasure-eio.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-eio.sh' |
pass | 6389949 | 2021-09-14 18:03:06 | 2021-09-14 18:32:57 | 2021-09-14 18:58:52 | 0:25:55 | 0:17:54 | 0:08:01 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-services/basic 3-final} | 2 | |
pass | 6389950 | 2021-09-14 18:03:07 | 2021-09-14 18:33:47 | 2021-09-14 19:02:09 | 0:28:22 | 0:17:27 | 0:10:55 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
pass | 6389951 | 2021-09-14 18:03:08 | 2021-09-14 18:34:18 | 2021-09-14 18:59:19 | 0:25:01 | 0:19:25 | 0:05:36 | smithi | master | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zstd supported-random-distro$/{rhel_8} tasks/workunits} | 2 | |
pass | 6389952 | 2021-09-14 18:03:09 | 2021-09-14 18:34:28 | 2021-09-14 19:03:20 | 0:28:52 | 0:22:19 | 0:06:33 | smithi | master | rhel | 8.4 | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6389953 | 2021-09-14 18:03:10 | 2021-09-14 18:34:28 | 2021-09-14 18:53:34 | 0:19:06 | 0:10:52 | 0:08:14 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
pass | 6389954 | 2021-09-14 18:03:11 | 2021-09-14 18:34:28 | 2021-09-14 18:55:06 | 0:20:38 | 0:12:46 | 0:07:52 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} | 1 | |
fail | 6389955 | 2021-09-14 18:03:12 | 2021-09-14 18:34:50 | 2021-09-14 19:10:01 | 0:35:11 | 0:21:05 | 0:14:06 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason:
'wait for operator' reached maximum tries (90) after waiting for 900 seconds |
fail | 6389956 | 2021-09-14 18:03:12 | 2021-09-14 18:36:40 | 2021-09-14 19:04:18 | 0:27:38 | 0:17:16 | 0:10:22 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
pass | 6389957 | 2021-09-14 18:03:13 | 2021-09-14 18:37:41 | 2021-09-14 18:55:34 | 0:17:53 | 0:07:05 | 0:10:48 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6389958 | 2021-09-14 18:03:14 | 2021-09-14 18:38:11 | 2021-09-14 19:12:04 | 0:33:53 | 0:22:27 | 0:11:26 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/redirect} | 2 | |
Failure Reason:
Command failed on smithi037 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json' |
pass | 6389959 | 2021-09-14 18:03:15 | 2021-09-14 18:39:01 | 2021-09-14 19:09:19 | 0:30:18 | 0:16:52 | 0:13:26 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/client-keyring 3-final} | 2 | |
dead | 6389960 | 2021-09-14 18:03:16 | 2021-09-14 18:41:02 | 2021-09-15 06:52:12 | 12:11:10 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason:
hit max job timeout |
fail | 6389961 | 2021-09-14 18:03:17 | 2021-09-14 18:43:02 | 2021-09-14 19:16:44 | 0:33:42 | 0:21:48 | 0:11:54 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
dead | 6389962 | 2021-09-14 18:03:18 | 2021-09-14 18:49:08 | 2021-09-15 06:58:34 | 12:09:26 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |||
Failure Reason:
hit max job timeout |
fail | 6389963 | 2021-09-14 18:03:19 | 2021-09-14 18:49:18 | 2021-09-14 21:44:39 | 2:55:21 | 2:45:04 | 0:10:17 | smithi | master | centos | 8.3 | rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
"2021-09-14T21:35:19.432188+0000 mon.a (mon.0) 370 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 6389964 | 2021-09-14 18:03:20 | 2021-09-14 18:49:28 | 2021-09-14 19:20:29 | 0:31:01 | 0:22:22 | 0:08:39 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
pass | 6389965 | 2021-09-14 18:03:20 | 2021-09-14 18:49:29 | 2021-09-14 19:12:29 | 0:23:00 | 0:14:01 | 0:08:59 | smithi | master | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6389966 | 2021-09-14 18:03:21 | 2021-09-14 18:49:29 | 2021-09-15 07:00:21 | 12:10:52 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |||
Failure Reason:
hit max job timeout |
pass | 6389967 | 2021-09-14 18:03:22 | 2021-09-14 18:51:19 | 2021-09-14 20:12:21 | 1:21:02 | 0:33:34 | 0:47:28 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
pass | 6389968 | 2021-09-14 18:03:23 | 2021-09-14 18:53:00 | 2021-09-14 19:18:08 | 0:25:08 | 0:14:26 | 0:10:42 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
fail | 6389969 | 2021-09-14 18:03:24 | 2021-09-14 18:53:00 | 2021-09-14 20:20:00 | 1:27:00 | 1:02:13 | 0:24:47 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason:
"2021-09-14T20:12:07.877644+0000 mon.a (mon.0) 168 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
dead | 6389970 | 2021-09-14 18:03:25 | 2021-09-14 18:53:10 | 2021-09-15 07:03:29 | 12:10:19 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/redirect_promote_tests} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 6389971 | 2021-09-14 18:03:26 | 2021-09-14 18:54:11 | 2021-09-14 19:26:16 | 0:32:05 | 0:13:13 | 0:18:52 | smithi | master | centos | 8.3 | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.3_container_tools_3.0} 1-start 2-services/rgw 3-final} | 1 | |
pass | 6389972 | 2021-09-14 18:03:27 | 2021-09-14 18:55:11 | 2021-09-14 19:19:49 | 0:24:38 | 0:17:27 | 0:07:11 | smithi | master | rhel | 8.4 | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 2 | |
dead | 6389973 | 2021-09-14 18:03:28 | 2021-09-14 18:56:41 | 2021-09-15 07:07:02 | 12:10:21 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |||
Failure Reason: hit max job timeout
dead | 6389974 | 2021-09-14 18:03:29 | 2021-09-14 18:59:02 | 2021-09-15 07:08:14 | 12:09:12 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
fail | 6389975 | 2021-09-14 18:03:30 | 2021-09-14 18:59:22 | 2021-09-14 19:34:28 | 0:35:06 | 0:28:07 | 0:06:59 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/rados_5925} | 2 | |
Failure Reason: "2021-09-14T19:25:27.045878+0000 mon.i (mon.7) 317 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
dead | 6389976 | 2021-09-14 18:03:31 | 2021-09-14 18:59:23 | 2021-09-15 07:07:46 | 12:08:23 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |||
Failure Reason: hit max job timeout
fail | 6389977 | 2021-09-14 18:03:31 | 2021-09-14 18:59:24 | 2021-09-14 19:25:24 | 0:26:00 | 0:13:55 | 0:12:05 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/rados_striper} | 2 | |
Failure Reason: "2021-09-14T19:15:59.659053+0000 mon.a (mon.0) 206 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 6389978 | 2021-09-14 18:03:32 | 2021-09-14 19:00:14 | 2021-09-14 19:41:31 | 0:41:17 | 0:29:26 | 0:11:51 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} | 2 | |
pass | 6389979 | 2021-09-14 18:03:33 | 2021-09-14 19:02:15 | 2021-09-14 19:28:18 | 0:26:03 | 0:19:25 | 0:06:38 | smithi | master | rhel | 8.4 | rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6389980 | 2021-09-14 18:03:34 | 2021-09-14 19:03:25 | 2021-09-14 19:32:57 | 0:29:32 | 0:18:40 | 0:10:52 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason: "2021-09-14T19:22:44.320487+0000 mon.a (mon.0) 129 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 6389981 | 2021-09-14 18:03:35 | 2021-09-14 19:04:25 | 2021-09-14 19:23:06 | 0:18:41 | 0:09:19 | 0:09:22 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_adoption} | 1 | |
pass | 6389982 | 2021-09-14 18:03:36 | 2021-09-14 19:04:26 | 2021-09-14 19:25:00 | 0:20:34 | 0:09:13 | 0:11:21 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} | 1 | |
dead | 6389983 | 2021-09-14 18:03:37 | 2021-09-14 19:05:56 | 2021-09-15 07:15:48 | 12:09:52 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/redirect_set_object} | 2 | |||
Failure Reason: hit max job timeout
pass | 6389984 | 2021-09-14 18:03:38 | 2021-09-14 19:06:16 | 2021-09-14 19:29:31 | 0:23:15 | 0:13:39 | 0:09:36 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 1-start 2-services/iscsi 3-final} | 2 | |
pass | 6389985 | 2021-09-14 18:03:39 | 2021-09-14 19:06:37 | 2021-09-14 20:07:49 | 1:01:12 | 0:14:36 | 0:46:36 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
pass | 6389986 | 2021-09-14 18:03:39 | 2021-09-14 19:07:07 | 2021-09-14 19:24:33 | 0:17:26 | 0:08:40 | 0:08:46 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6389987 | 2021-09-14 18:03:40 | 2021-09-14 19:07:17 | 2021-09-14 20:06:42 | 0:59:25 | 0:11:48 | 0:47:37 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-hybrid supported-random-distro$/{centos_8} tasks/crash} | 2 | |
pass | 6389988 | 2021-09-14 18:03:41 | 2021-09-14 19:07:38 | 2021-09-14 19:33:49 | 0:26:11 | 0:15:35 | 0:10:36 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6389989 | 2021-09-14 18:03:42 | 2021-09-14 19:08:58 | 2021-09-14 19:30:09 | 0:21:11 | 0:12:36 | 0:08:35 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mgr} | 1 | |
pass | 6389990 | 2021-09-14 18:03:43 | 2021-09-14 19:09:18 | 2021-09-14 20:14:06 | 1:04:48 | 0:37:49 | 0:26:59 | smithi | master | centos | 8.3 | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
pass | 6389991 | 2021-09-14 18:03:44 | 2021-09-14 19:09:18 | 2021-09-14 19:37:10 | 0:27:52 | 0:16:52 | 0:11:00 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} | 1 | |
pass | 6389992 | 2021-09-14 18:03:45 | 2021-09-14 19:09:29 | 2021-09-14 20:00:42 | 0:51:13 | 0:14:32 | 0:36:41 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/mirror 3-final} | 2 | |
dead | 6389993 | 2021-09-14 18:03:46 | 2021-09-14 19:09:50 | 2021-09-15 07:19:33 | 12:09:43 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/set-chunks-read} | 2 | |||
Failure Reason: hit max job timeout
fail | 6389994 | 2021-09-14 18:03:47 | 2021-09-14 19:10:10 | 2021-09-14 20:22:59 | 1:12:49 | 1:02:31 | 0:10:18 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: wait_for_clean: failed before timeout expired
pass | 6389995 | 2021-09-14 18:03:47 | 2021-09-14 19:10:10 | 2021-09-14 19:33:11 | 0:23:01 | 0:12:07 | 0:10:54 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 6389996 | 2021-09-14 18:03:48 | 2021-09-14 19:12:01 | 2021-09-15 07:21:57 | 12:09:56 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason: hit max job timeout
fail | 6389997 | 2021-09-14 18:03:49 | 2021-09-14 19:12:31 | 2021-09-14 20:23:49 | 1:11:18 | 0:26:02 | 0:45:16 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8}} | 2 | |
Failure Reason: Command failed on smithi120 with status 11: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg deep-scrub 2.3e'
fail | 6389998 | 2021-09-14 18:03:50 | 2021-09-14 19:16:52 | 2021-09-14 23:17:26 | 4:00:34 | 3:19:19 | 0:41:15 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi029 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6389999 | 2021-09-14 18:03:52 | 2021-09-14 19:18:12 | 2021-09-14 22:17:26 | 2:59:14 | 2:48:27 | 0:10:47 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: "2021-09-14T22:09:17.654470+0000 mon.a (mon.0) 410 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 6390000 | 2021-09-14 18:03:53 | 2021-09-14 19:19:53 | 2021-09-14 19:57:03 | 0:37:10 | 0:27:47 | 0:09:23 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
fail | 6390001 | 2021-09-14 18:03:54 | 2021-09-14 19:19:53 | 2021-09-14 20:07:29 | 0:47:36 | 0:40:27 | 0:07:09 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_big} | 2 | |
Failure Reason: "2021-09-14T19:57:59.203902+0000 mon.a (mon.0) 252 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 6390002 | 2021-09-14 18:03:55 | 2021-09-14 19:20:33 | 2021-09-14 19:45:41 | 0:25:08 | 0:19:29 | 0:05:39 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
fail | 6390003 | 2021-09-14 18:03:56 | 2021-09-14 19:20:34 | 2021-09-15 02:12:03 | 6:51:29 | 6:14:46 | 0:36:43 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/lockdep} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi086 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
dead | 6390004 | 2021-09-14 18:03:57 | 2021-09-14 19:24:34 | 2021-09-15 07:33:59 | 12:09:25 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 | |||
Failure Reason: hit max job timeout
pass | 6390005 | 2021-09-14 18:03:58 | 2021-09-14 19:25:25 | 2021-09-14 19:50:21 | 0:24:56 | 0:19:16 | 0:05:40 | smithi | master | rhel | 8.4 | rados/singleton/{all/mon-config mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6390006 | 2021-09-14 18:03:59 | 2021-09-14 19:25:25 | 2021-09-14 19:44:43 | 0:19:18 | 0:10:30 | 0:08:48 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
fail | 6390007 | 2021-09-14 18:04:00 | 2021-09-14 19:25:25 | 2021-09-14 20:08:56 | 0:43:31 | 0:35:32 | 0:07:59 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: "2021-09-14T19:59:57.530453+0000 mon.a (mon.0) 397 : cluster [WRN] Health check failed: 1 daemons have recently crashed (RECENT_CRASH)" in cluster log
pass | 6390008 | 2021-09-14 18:04:01 | 2021-09-14 19:26:26 | 2021-09-14 19:53:28 | 0:27:02 | 0:18:26 | 0:08:36 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6390009 | 2021-09-14 18:04:02 | 2021-09-14 19:28:26 | 2021-09-14 19:58:41 | 0:30:15 | 0:22:30 | 0:07:45 | smithi | master | rhel | 8.4 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/mon_recovery} | 2 | |
dead | 6390010 | 2021-09-14 18:04:02 | 2021-09-14 19:29:36 | 2021-09-15 07:42:55 | 12:13:19 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||
Failure Reason: hit max job timeout
dead | 6390011 | 2021-09-14 18:04:03 | 2021-09-14 19:33:57 | 2021-09-15 07:43:43 | 12:09:46 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
fail | 6390012 | 2021-09-14 18:04:04 | 2021-09-14 19:34:38 | 2021-09-14 23:12:43 | 3:38:05 | 3:13:20 | 0:24:45 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi090 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6390013 | 2021-09-14 18:04:05 | 2021-09-14 19:38:38 | 2021-09-14 20:28:25 | 0:49:47 | 0:38:44 | 0:11:03 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
Failure Reason: Command failed on smithi041 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
pass | 6390014 | 2021-09-14 18:04:06 | 2021-09-14 19:41:39 | 2021-09-14 19:57:37 | 0:15:58 | 0:06:13 | 0:09:45 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm_repos} | 1 | |
dead | 6390015 | 2021-09-14 18:04:07 | 2021-09-14 19:42:29 | 2021-09-15 07:53:50 | 12:11:21 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |||
Failure Reason: hit max job timeout
pass | 6390016 | 2021-09-14 18:04:08 | 2021-09-14 19:44:50 | 2021-09-14 20:25:19 | 0:40:29 | 0:28:18 | 0:12:11 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/connectivity} | 2 | |
fail | 6390017 | 2021-09-14 18:04:09 | 2021-09-14 19:45:50 | 2021-09-14 20:44:20 | 0:58:30 | 0:42:53 | 0:15:37 | smithi | master | centos | 8.stream | rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason: "2021-09-14T20:34:07.898891+0000 mon.c (mon.2) 583 : cluster [WRN] Health check failed: 1 daemons have recently crashed (RECENT_CRASH)" in cluster log
dead | 6390018 | 2021-09-14 18:04:10 | 2021-09-14 19:49:11 | 2021-09-15 07:58:09 | 12:08:58 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/small-objects-localized} | 2 | |||
Failure Reason: hit max job timeout
pass | 6390019 | 2021-09-14 18:04:11 | 2021-09-14 19:49:11 | 2021-09-14 20:14:03 | 0:24:52 | 0:18:39 | 0:06:13 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6390020 | 2021-09-14 18:04:12 | 2021-09-14 19:49:11 | 2021-09-14 20:15:51 | 0:26:40 | 0:20:50 | 0:05:50 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-services/nfs-ingress 3-final} | 2 | |
dead | 6390021 | 2021-09-14 18:04:13 | 2021-09-14 19:49:11 | 2021-09-15 07:58:30 | 12:09:19 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 | |||
Failure Reason: hit max job timeout
pass | 6390022 | 2021-09-14 18:04:14 | 2021-09-14 19:49:22 | 2021-09-14 20:15:29 | 0:26:07 | 0:17:57 | 0:08:10 | smithi | master | rhel | 8.4 | rados/cephadm/smoke/{distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
pass | 6390023 | 2021-09-14 18:04:15 | 2021-09-14 19:50:22 | 2021-09-14 20:17:32 | 0:27:10 | 0:14:03 | 0:13:07 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_8} tasks/failover} | 2 | |
pass | 6390024 | 2021-09-14 18:04:15 | 2021-09-14 19:52:02 | 2021-09-14 20:22:27 | 0:30:25 | 0:21:25 | 0:09:00 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} | 1 | |
fail | 6390025 | 2021-09-14 18:04:16 | 2021-09-14 19:53:33 | 2021-09-14 20:41:06 | 0:47:33 | 0:37:51 | 0:09:42 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command failed on smithi060 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 6390026 | 2021-09-14 18:04:17 | 2021-09-14 19:53:33 | 2021-09-14 20:30:31 | 0:36:58 | 0:24:40 | 0:12:18 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
Failure Reason: timeout expired in wait_until_healthy
fail | 6390027 | 2021-09-14 18:04:18 | 2021-09-14 19:55:34 | 2021-09-14 23:16:49 | 3:21:15 | 3:10:34 | 0:10:41 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi180 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 6390028 | 2021-09-14 18:04:19 | 2021-09-14 19:56:04 | 2021-09-14 20:52:32 | 0:56:28 | 0:46:24 | 0:10:04 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed on smithi027 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
pass | 6390029 | 2021-09-14 18:04:20 | 2021-09-14 19:56:04 | 2021-09-14 21:00:44 | 1:04:40 | 0:53:46 | 0:10:54 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/misc} | 1 | |
dead | 6390030 | 2021-09-14 18:04:21 | 2021-09-14 19:57:04 | 2021-09-15 08:07:12 | 12:10:08 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/small-objects} | 2 | |||
Failure Reason: hit max job timeout
fail | 6390031 | 2021-09-14 18:04:22 | 2021-09-14 19:57:45 | 2021-09-14 20:37:55 | 0:40:10 | 0:30:49 | 0:09:21 | smithi | master | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} | 2 | |
Failure Reason: "2021-09-14T20:26:38.519921+0000 mon.a (mon.0) 177 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 6390032 | 2021-09-14 18:04:23 | 2021-09-14 19:58:45 | 2021-09-14 20:39:49 | 0:41:04 | 0:32:20 | 0:08:44 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
pass | 6390033 | 2021-09-14 18:04:24 | 2021-09-14 20:00:46 | 2021-09-14 20:24:10 | 0:23:24 | 0:08:53 | 0:14:31 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} | 1 | |
pass | 6390034 | 2021-09-14 18:04:24 | 2021-09-14 20:06:47 | 2021-09-14 20:25:39 | 0:18:52 | 0:09:32 | 0:09:20 | smithi | master | centos | 8.stream | rados/objectstore/{backends/fusestore supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6390035 | 2021-09-14 18:04:25 | 2021-09-14 20:06:47 | 2021-09-15 08:16:39 | 12:09:52 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |||
Failure Reason: hit max job timeout
pass | 6390036 | 2021-09-14 18:04:26 | 2021-09-14 20:07:58 | 2021-09-14 20:39:57 | 0:31:59 | 0:20:06 | 0:11:53 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress2 3-final} | 2 | |
fail | 6390037 | 2021-09-14 18:04:27 | 2021-09-14 20:08:58 | 2021-09-14 20:51:32 | 0:42:34 | 0:26:07 | 0:16:27 | smithi | master | centos | 8.3 | rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed on smithi176 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
pass | 6390038 | 2021-09-14 18:04:28 | 2021-09-14 20:12:29 | 2021-09-14 20:40:10 | 0:27:41 | 0:16:46 | 0:10:55 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
fail | 6390039 | 2021-09-14 18:04:29 | 2021-09-14 20:12:29 | 2021-09-14 20:39:44 | 0:27:15 | 0:17:19 | 0:09:56 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason: "2021-09-14T20:31:12.715949+0000 mon.a (mon.0) 121 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
dead | 6390040 | 2021-09-14 18:04:29 | 2021-09-14 20:12:39 | 2021-09-15 08:23:06 | 12:10:27 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |||
Failure Reason: hit max job timeout
pass | 6390041 | 2021-09-14 18:04:30 | 2021-09-14 20:14:09 | 2021-09-14 20:35:18 | 0:21:09 | 0:12:16 | 0:08:53 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_orch_cli} | 1 | |
fail | 6390042 | 2021-09-14 18:04:31 | 2021-09-14 20:14:10 | 2021-09-15 00:40:04 | 4:25:54 | 4:15:32 | 0:10:22 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_rgw.sh) on smithi110 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
pass | 6390043 | 2021-09-14 18:04:32 | 2021-09-14 20:14:50 | 2021-09-14 20:38:37 | 0:23:47 | 0:13:39 | 0:10:08 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 1-start 2-services/nfs 3-final} | 2 | |
pass | 6390044 | 2021-09-14 18:04:33 | 2021-09-14 20:15:30 | 2021-09-14 20:37:38 | 0:22:08 | 0:07:34 | 0:14:34 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 3 | |
dead | 6390045 | 2021-09-14 18:04:34 | 2021-09-14 20:17:41 | 2021-09-15 08:27:59 | 12:10:18 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |||
Failure Reason: hit max job timeout
dead | 6390046 | 2021-09-14 18:04:35 | 2021-09-14 20:20:01 | 2021-09-15 08:31:10 | 12:11:09 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason: hit max job timeout
fail | 6390047 | 2021-09-14 18:04:35 | 2021-09-14 20:23:02 | 2021-09-14 21:34:55 | 1:11:53 | 1:04:19 | 0:07:34 | smithi | master | rhel | 8.4 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_osdmap_prune} | 2 | |
Failure Reason: "2021-09-14T21:25:34.786649+0000 mon.d (mon.6) 11298 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 6390048 | 2021-09-14 18:04:36 | 2021-09-14 20:23:55 | 2021-09-14 20:43:08 | 0:19:13 | 0:09:07 | 0:10:06 | smithi | master | centos | 8.3 | rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
dead | 6390049 | 2021-09-14 18:04:37 | 2021-09-14 20:24:15 | 2021-09-15 08:34:08 | 12:09:53 | smithi | master | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects} | 2 | |||
Failure Reason: hit max job timeout
pass | 6390050 | 2021-09-14 18:04:38 | 2021-09-14 20:25:26 | 2021-09-14 20:53:04 | 0:27:38 | 0:14:43 | 0:12:55 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/nfs2 3-final} | 2 | |
fail | 6390051 | 2021-09-14 18:04:39 | 2021-09-14 20:38:38 | 2021-09-15 02:13:19 | 5:34:41 | 5:22:12 | 0:12:29 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
fail | 6390052 | 2021-09-14 18:04:40 | 2021-09-14 20:39:58 | 2021-09-14 21:31:06 | 0:51:08 | 0:41:16 | 0:09:52 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{filestore-xfs} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_pool_create_with_ecp_and_rule (tasks.mgr.dashboard.test_pool.PoolTest)
fail | 6390053 | 2021-09-14 18:04:41 | 2021-09-14 20:39:59 | 2021-09-14 21:05:23 | 0:25:24 | 0:15:52 | 0:09:32 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: "2021-09-14T20:57:06.678668+0000 mon.a (mon.0) 214 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 6390054 | 2021-09-14 18:04:42 | 2021-09-14 20:40:19 | 2021-09-14 21:23:59 | 0:43:40 | 0:33:01 | 0:10:39 | smithi | master | centos | 8.3 | rados/upgrade/parallel/{0-distro$/{centos_8.3_container_tools_3.0} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_cmpomap.sh) on smithi008 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_cmpomap.sh'
dead | 6390055 | 2021-09-14 18:04:42 | 2021-09-14 20:41:09 | 2021-09-15 08:53:18 | 12:12:09 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |||
Failure Reason: hit max job timeout
dead | 6390056 | 2021-09-14 18:04:43 | 2021-09-14 20:44:30 | 2021-09-15 09:00:24 | 12:15:54 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-localized} | 2 | |||
Failure Reason: hit max job timeout
fail | 6390057 | 2021-09-14 18:04:44 | 2021-09-15 01:13:34 | 2021-09-15 04:49:38 | 3:36:04 | 3:29:01 | 0:07:03 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi157 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e6b3c32ff8a98061f42429a5a24ed6b6da1c6ea7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 6390058 | 2021-09-14 18:04:45 | 2021-09-15 01:13:44 | 2021-09-15 01:35:38 | 0:21:54 | 0:11:57 | 0:09:57 | smithi | master | centos | 8.stream | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 2 | |
pass | 6390059 | 2021-09-14 18:04:46 | 2021-09-15 01:14:45 | 2021-09-15 01:32:15 | 0:17:30 | 0:09:02 | 0:08:28 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
pass | 6390060 | 2021-09-14 18:04:47 | 2021-09-15 01:14:45 | 2021-09-15 01:38:59 | 0:24:14 | 0:12:07 | 0:12:07 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/insights} | 2 | |
pass | 6390061 | 2021-09-14 18:04:48 | 2021-09-15 01:15:05 | 2021-09-15 01:58:03 | 0:42:58 | 0:25:43 | 0:17:15 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
fail | 6390062 | 2021-09-14 18:04:48 | 2021-09-15 01:22:56 | 2021-09-15 02:15:02 | 0:52:06 | 0:38:47 | 0:13:19 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
Failure Reason: "2021-09-15T02:00:51.739913+0000 mon.a (mon.0) 543 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 6390063 | 2021-09-14 18:04:49 | 2021-09-15 01:23:17 | 2021-09-15 01:47:00 | 0:23:43 | 0:09:51 | 0:13:52 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6390064 | 2021-09-14 18:04:50 | 2021-09-15 01:28:07 | 2021-09-15 02:01:16 | 0:33:09 | 0:24:02 | 0:09:07 | smithi | master | rhel | 8.4 | rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_3.0} 2-node-mgr orchestrator_cli} | 2 | |
pass | 6390065 | 2021-09-14 18:04:51 | 2021-09-15 01:30:08 | 2021-09-15 01:55:36 | 0:25:28 | 0:13:52 | 0:11:36 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
dead | 6390066 | 2021-09-14 18:04:52 | 2021-09-15 01:32:18 | 2021-09-15 13:44:26 | 12:12:08 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/snaps-few-objects} | 2 | |||
Failure Reason: hit max job timeout
pass | 6390067 | 2021-09-14 18:04:53 | 2021-09-15 01:35:41 | 2021-09-15 01:58:47 | 0:23:06 | 0:14:47 | 0:08:19 | smithi | master | centos | 8.stream | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{centos_8.stream}} | 1 | |
dead | 6390068 | 2021-09-14 18:04:54 | 2021-09-15 01:36:01 | 2021-09-15 13:46:37 | 12:10:36 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |||
Failure Reason: hit max job timeout