Name: smithi036.front.sepia.ceph.com
Machine Type: smithi
Up: True
Locked: True
Locked Since: 2021-05-11 04:18:36.467013
Locked By: scheduled_teuthology@teuthology
OS Type: rhel
OS Version: 8.3
Arch: x86_64
Description: /home/teuthworker/archive/teuthology-2021-05-11_04:17:02-fs-pacific-distro-basic-smithi/6108468
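The lock record above can also be fetched programmatically. Below is a minimal Python sketch that queries a paddles-style REST endpoint for a single node; the host name and the /nodes/<name>/ path are assumptions about the lab's paddles instance, not something shown in this listing.

import requests

# Assumed paddles host for the Sepia lab; point this at your own instance.
PADDLES = "https://paddles.front.sepia.ceph.com"

def node_status(name):
    # Assumed endpoint shape: GET /nodes/<fqdn>/ returning the lock record
    # (locked, locked_by, locked_since, os_type, os_version, arch, description).
    resp = requests.get("%s/nodes/%s/" % (PADDLES, name), timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    info = node_status("smithi036.front.sepia.ceph.com")
    print(info.get("locked"), info.get("locked_by"), info.get("description"))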
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 6108468 2021-05-11 04:17:57 2021-05-11 04:18:36 2021-05-11 05:06:57 0:49:52 smithi master rhel 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/dbench}} 3
pass 6108129 2021-05-11 00:26:36 2021-05-11 01:19:33 2021-05-11 01:48:29 0:28:56 0:19:29 0:09:27 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/upmap msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_16.04} thrashers/pggrow thrashosds-health workloads/cache-snaps} 2
pass 6108070 2021-05-11 00:25:43 2021-05-11 00:57:50 2021-05-11 01:19:25 0:21:35 0:14:10 0:07:25 smithi master rhel 7.9 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{rhel_7} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6107963 2021-05-11 00:24:05 2021-05-11 00:24:06 2021-05-11 00:57:52 0:33:46 0:23:39 0:10:07 smithi master centos 7.8 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-small-objects} 2
pass 6107208 2021-05-09 06:10:15 2021-05-09 23:08:21 2021-05-10 00:39:55 1:31:34 1:18:15 0:13:19 smithi master centos 8.2 rbd/encryption/{cache/writearound clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-comp-zstd pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/qemu_xfstests_luks1} 3
pass 6107189 2021-05-09 06:10:00 2021-05-09 22:51:44 2021-05-09 23:08:28 0:16:44 0:07:35 0:09:09 smithi master ubuntu 18.04 rbd/cli/{base/install clusters/{fixed-1 openstack} features/layering msgr-failures/few objectstore/bluestore-comp-zlib pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_import_export} 1
pass 6107162 2021-05-09 06:09:39 2021-05-09 22:11:13 2021-05-09 22:47:20 0:36:07 0:27:01 0:09:06 smithi master rhel 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zlib pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/fsx} 3
pass 6107130 2021-05-09 06:09:15 2021-05-09 21:33:30 2021-05-09 22:11:04 0:37:34 0:24:27 0:13:07 smithi master ubuntu 18.04 rbd/qemu/{cache/writethrough clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-hybrid pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/qemu_bonnie} 3
pass 6107100 2021-05-09 06:08:49 2021-05-09 21:06:30 2021-05-09 21:35:19 0:28:49 0:16:08 0:12:41 smithi master centos 8.2 rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-hybrid supported-random-distro$/{centos_8} thrashers/cache thrashosds-health workloads/rbd_fsx_nbd} 3
pass 6107058 2021-05-09 06:08:17 2021-05-09 20:37:55 2021-05-09 21:07:52 0:29:57 0:19:40 0:10:17 smithi master ubuntu 18.04 rbd/thrash/{base/install clusters/{fixed-2 openstack} msgr-failures/few objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} thrashers/cache thrashosds-health workloads/rbd_fsx_journal} 2
pass 6106787 2021-05-09 05:08:26 2021-05-09 20:00:34 2021-05-09 20:38:03 0:37:29 0:26:59 0:10:30 smithi master centos 8.2 rgw/verify/{centos_latest clusters/fixed-2 frontend/civetweb ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/replicated sharding$/{single} striping$/{stripe-equals-chunk} tasks/{0-install cls ragweed reshard s3tests-java s3tests} validater/lockdep} 2
pass 6106764 2021-05-09 05:08:08 2021-05-09 19:42:55 2021-05-09 20:00:39 0:17:44 0:07:53 0:09:51 smithi master ubuntu rgw/multifs/{clusters/fixed-2 frontend/civetweb ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/replicated tasks/rgw_multipart_upload} 2
pass 6106615 2021-05-09 04:20:10 2021-05-09 18:01:29 2021-05-09 19:42:45 1:41:16 1:32:02 0:09:14 smithi master ubuntu 20.04 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} 2
pass 6106529 2021-05-09 04:19:01 2021-05-09 16:46:39 2021-05-09 18:01:20 1:14:41 1:07:50 0:06:51 smithi master rhel 8.3 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/failover} 2
fail 6106250 2021-05-09 03:34:56 2021-05-09 13:14:33 2021-05-09 16:46:36 3:32:03 3:21:26 0:10:37 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed on smithi036 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
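Exit status 124 is what timeout(1) returns when it kills the wrapped command, so this failure means `ceph osd dump --format=json` did not complete within the 120-second limit rather than returning an error of its own. A minimal Python sketch of the same check, assuming the ceph CLI is on PATH and the default cluster name:

import subprocess

def osd_dump_json(timeout_s=120):
    # Run the same command the harness ran, under the same time limit.
    try:
        result = subprocess.run(
            ["ceph", "--cluster", "ceph", "osd", "dump", "--format=json"],
            capture_output=True, text=True, timeout=timeout_s, check=True,
        )
    except subprocess.TimeoutExpired:
        # Same condition that timeout(1) reports as exit status 124.
        raise RuntimeError("ceph osd dump did not answer within %d seconds" % timeout_s)
    return result.stdout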

pass 6106170 2021-05-09 03:33:54 2021-05-09 12:20:23 2021-05-09 13:14:27 0:54:04 0:37:39 0:16:25 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/cache-snaps} 3
pass 6106049 2021-05-08 14:23:38 2021-05-09 09:47:55 2021-05-09 12:22:44 2:34:49 2:13:55 0:20:54 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} mon_election/connectivity objectstore/bluestore-bitmap ubuntu_18.04} 4
pass 6105993 2021-05-08 14:22:39 2021-05-09 06:15:00 2021-05-09 09:54:52 3:39:52 3:20:17 0:19:35 smithi master ubuntu 18.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-nautilus-install/octopus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-bitmap 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/classic thrashosds-health ubuntu_18.04} 5
fail 6105986 2021-05-08 13:16:54 2021-05-08 13:20:06 2021-05-08 14:02:16 0:42:10 0:32:04 0:10:06 smithi master ubuntu 20.04 rados/standalone/{mon_election/classic supported-random-distro$/{ubuntu_latest} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-dump.sh) on smithi036 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5cd0ad0001da189274219f5e829752717eb856d2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-dump.sh'

pass 6105933 2021-05-08 12:17:38 2021-05-09 05:47:06 2021-05-09 06:19:02 0:31:56 0:13:13 0:18:43 smithi master centos 8.2 powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/centos_8 tasks/readwrite thrashosds-health whitelist_health} 4