Name: smithi037.front.sepia.ceph.com
Machine Type: smithi
Up: True
Locked: False
Locked Since: (none)
Locked By: (none)
OS Type: centos
OS Version: 8
Arch: x86_64
Description: /home/teuthworker/archive/rfriedma-2021-09-18_10:17:46-rados-wip-rf-scrub-locations-distro-basic-smithi/6396044
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6396044 2021-09-18 10:20:54 2021-09-18 10:20:54 2021-09-18 11:02:27 0:41:33 0:31:35 0:09:58 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} 2
pass 6395764 2021-09-17 21:15:50 2021-09-17 23:17:19 2021-09-18 00:38:03 1:20:44 1:12:10 0:08:34 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/dashboard} 2
pass 6395572 2021-09-17 21:12:46 2021-09-17 21:53:03 2021-09-17 23:17:01 1:23:58 1:12:45 0:11:13 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/dashboard} 2
pass 6395528 2021-09-17 21:12:04 2021-09-17 21:29:04 2021-09-17 21:53:59 0:24:55 0:18:25 0:06:30 smithi master rhel 8.4 rados/cephadm/smoke/{distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
fail 6395382 2021-09-17 20:52:31 2021-09-18 01:26:19 2021-09-18 02:40:09 1:13:50 1:01:31 0:12:19 smithi master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi008 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ed05851e025ccab417559d526627e9f1e0599a4b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

pass 6395316 2021-09-17 20:51:34 2021-09-18 00:37:56 2021-09-18 01:27:07 0:49:11 0:37:27 0:11:44 smithi master ubuntu 20.04 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} 2
fail 6395275 2021-09-17 20:50:48 2021-09-17 20:51:19 2021-09-17 21:29:13 0:37:54 0:24:06 0:13:48 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

"2021-09-17T21:15:50.231136+0000 mon.a (mon.0) 294 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 6395179 2021-09-17 16:25:14 2021-09-17 17:18:22 2021-09-17 17:39:58 0:21:36 0:11:57 0:09:39 smithi master ubuntu 20.04 rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap thrasher/default thrashosds-health ubuntu_latest workload/rgw_multipart_upload} 2
fail 6393819 2021-09-16 18:43:46 2021-09-16 18:52:16 2021-09-16 19:09:13 0:16:57 0:07:19 0:09:38 smithi master ubuntu 20.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/replicated tasks/rgw_multipart_upload ubuntu_latest} 2
Failure Reason:

Command failed (workunit test rgw/s3_multipart_upload.pl) on smithi036 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bc749ef43279601f3e9305d4ad8723d440abaa66 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/s3_multipart_upload.pl'

pass 6392778 2021-09-16 16:05:28 2021-09-16 16:19:12 2021-09-16 16:47:10 0:27:58 0:15:06 0:12:52 smithi master ubuntu 20.04 orch:rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
pass 6392710 2021-09-16 15:58:32 2021-09-16 18:27:46 2021-09-16 18:52:11 0:24:25 0:10:08 0:14:17 smithi master ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6392663 2021-09-16 15:57:51 2021-09-16 18:07:19 2021-09-16 18:27:49 0:20:30 0:13:01 0:07:29 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_orch_cli} 1
pass 6392628 2021-09-16 15:57:22 2021-09-16 17:47:57 2021-09-16 18:07:10 0:19:13 0:06:32 0:12:41 smithi master ubuntu 20.04 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 3
pass 6392577 2021-09-16 15:56:38 2021-09-16 17:24:19 2021-09-16 17:49:09 0:24:50 0:19:40 0:05:10 smithi master rhel 8.4 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 6392505 2021-09-16 15:55:34 2021-09-16 16:48:09 2021-09-16 17:24:38 0:36:29 0:29:54 0:06:35 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/small-objects-balanced} 2
pass 6392390 2021-09-16 15:53:44 2021-09-16 15:54:25 2021-09-16 16:19:12 0:24:47 0:14:56 0:09:51 smithi master centos 8.2 rados/cephadm/smoke/{distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
pass 6391772 2021-09-15 21:39:05 2021-09-15 22:00:17 2021-09-16 02:41:54 4:41:37 4:22:50 0:18:47 smithi master ubuntu 20.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-nautilus-install/octopus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-bitmap 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/connectivity thrashosds-health ubuntu_20.04} 5
dead 6391471 2021-09-15 21:23:07 2021-09-16 02:42:03 2021-09-16 14:50:31 12:08:28 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} 2
Failure Reason:

hit max job timeout

pass 6391377 2021-09-15 21:21:39 2021-09-15 21:22:10 2021-09-15 22:02:54 0:40:44 0:30:04 0:10:40 smithi master centos 8.stream rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream} thrashers/sync-many workloads/rados_api_tests} 2
fail 6391163 2021-09-15 18:24:59 2021-09-15 18:25:29 2021-09-15 21:08:36 2:43:07 2:28:30 0:14:37 smithi master centos 8.3 rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/replicated sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Command failed on smithi037 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'