Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
smithi162.front.sepia.ceph.com | smithi | True | False | | | ubuntu | 18.04 | x86_64 | /home/teuthworker/archive/teuthology-2021-09-19_03:31:02-rados-pacific-distro-basic-smithi/6397525
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6397525 2021-09-19 03:37:13 2021-09-19 16:25:32 2021-09-19 17:17:49 0:52:17 0:41:29 0:10:48 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/cache-snaps} 3
pass 6397458 2021-09-19 03:36:20 2021-09-19 15:48:04 2021-09-19 16:26:08 0:38:04 0:27:29 0:10:35 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} 2
pass 6397400 2021-09-19 03:35:34 2021-09-19 15:17:12 2021-09-19 15:48:58 0:31:46 0:20:50 0:10:56 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-snaps} 2
pass 6397347 2021-09-19 03:34:51 2021-09-19 14:49:44 2021-09-19 15:17:12 0:27:28 0:16:44 0:10:44 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/nfs-ingress2 3-final} 2
pass 6397293 2021-09-19 03:34:09 2021-09-19 14:23:10 2021-09-19 14:49:48 0:26:38 0:18:54 0:07:44 smithi master rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} 2
pass 6397190 2021-09-19 03:32:48 2021-09-19 13:26:01 2021-09-19 14:24:24 0:58:23 0:48:36 0:09:47 smithi master ubuntu 20.04 rados/singleton/{all/osd-recovery mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 1
pass 6397113 2021-09-19 01:03:01 2021-09-19 12:45:45 2021-09-19 13:26:33 0:40:48 0:29:24 0:11:24 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} 2
pass 6397075 2021-09-19 01:02:23 2021-09-19 12:24:29 2021-09-19 12:46:49 0:22:20 0:11:41 0:10:39 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache} 2
pass 6397025 2021-09-19 01:01:41 2021-09-19 11:58:59 2021-09-19 12:25:07 0:26:08 0:14:57 0:11:11 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/c2c} 1
pass 6396974 2021-09-19 01:00:51 2021-09-19 11:26:43 2021-09-19 11:59:47 0:33:04 0:22:43 0:10:21 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-snaps-balanced} 2
dead 6396518 2021-09-18 18:24:18 2021-09-18 23:13:18 2021-09-19 11:26:20 12:13:02 smithi master centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8.stream} thrashers/fastread thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
Failure Reason:

hit max job timeout

pass 6396513 2021-09-18 18:24:13 2021-09-18 22:47:44 2021-09-18 23:16:24 0:28:40 0:20:28 0:08:12 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-services/rgw-ingress 3-final} 2
fail 6396492 2021-09-18 18:23:52 2021-09-18 22:03:13 2021-09-18 22:49:07 0:45:54 0:23:20 0:22:34 smithi master centos 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_5925} 2
Failure Reason:

"2021-09-18T22:40:30.906333+0000 mon.a (mon.0) 185 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 6396479 2021-09-18 18:23:39 2021-09-18 21:30:56 2021-09-18 22:03:35 0:32:39 0:21:00 0:11:39 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress2 3-final} 2
fail 6396455 2021-09-18 18:23:16 2021-09-18 20:57:45 2021-09-18 21:32:04 0:34:19 0:22:18 0:12:01 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/pool-create-delete} 2
Failure Reason:

"2021-09-18T21:22:43.928826+0000 mon.a (mon.0) 1601 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 6396362 2021-09-18 18:21:44 2021-09-18 19:42:06 2021-09-18 20:55:56 1:13:50 1:01:00 0:12:50 smithi master centos 8.stream rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command failed on smithi162 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 6396315 2021-09-18 18:20:56 2021-09-18 19:08:18 2021-09-18 19:42:39 0:34:21 0:22:46 0:11:35 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

timeout expired in wait_until_healthy

fail 6396269 2021-09-18 17:42:11 2021-09-18 18:35:54 2021-09-18 19:09:32 0:33:38 0:19:06 0:14:32 smithi master centos 8.3 rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec-profile sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=23fc5283f5c5d0c57ded20679194b3bc26f6ee2d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

pass 6396209 2021-09-18 14:47:25 2021-09-18 16:24:47 2021-09-18 18:38:04 2:13:17 2:01:06 0:12:11 smithi master centos 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
fail 6396192 2021-09-18 14:47:08 2021-09-18 15:46:58 2021-09-18 16:26:46 0:39:48 0:29:39 0:10:09 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} 3
Failure Reason:

"2021-09-18T16:04:20.439105+0000 mon.a (mon.0) 154 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log