Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
smithi063.front.sepia.ceph.com | smithi | True | True | 2021-10-24 04:54:27.047465 | scheduled_teuthology@teuthology | ubuntu | 20.04 | x86_64 | /home/teuthworker/archive/teuthology-2021-10-24_03:31:02-rados-pacific-distro-default-smithi/6458529
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
running | 6458529 | 2021-10-24 03:35:18 | 2021-10-24 04:54:26 | 2021-10-24 05:48:39 | 0:55:54 | | | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} | 1
pass | 6458448 | 2021-10-24 03:34:19 | 2021-10-24 04:16:33 | 2021-10-24 04:53:46 | 0:37:13 | 0:23:22 | 0:13:51 | smithi | master | centos | 8.2 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-balanced} | 2
fail | 6458397 | 2021-10-24 03:33:40 | 2021-10-24 03:54:14 | 2021-10-24 04:18:13 | 0:23:59 | 0:11:42 | 0:12:17 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2
Failure Reason:

Command failed on smithi158 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7879cd16a7aa354d4121719a8e19b4e59da59c81 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bcef0506-3480-11ec-8c28-001a4aab830c -- ceph mon dump -f json'

fail | 6458357 | 2021-10-24 03:33:09 | 2021-10-24 03:33:09 | 2021-10-24 03:54:40 | 0:21:31 | | | smithi | master | ubuntu | 18.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_18.04 workloads/radosbench_4K_seq_read} | 1
Failure Reason:

Could not reconnect to ubuntu@smithi063.front.sepia.ceph.com

pass | 6458285 | 2021-10-23 22:10:58 | 2021-10-23 22:30:27 | 2021-10-23 22:58:58 | 0:28:31 | 0:18:55 | 0:09:36 | smithi | master | rhel | 8.4 | rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2
pass | 6458262 | 2021-10-23 22:10:35 | 2021-10-23 22:10:48 | 2021-10-23 22:33:31 | 0:22:43 | 0:12:33 | 0:10:10 | smithi | master | centos | 8.2 | rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_adoption} | 1
pass | 6458183 | 2021-10-23 14:23:16 | 2021-10-23 14:23:16 | 2021-10-23 17:02:20 | 2:39:04 | 2:21:52 | 0:17:12 | smithi | master | ubuntu | 18.04 | upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} mon_election/connectivity objectstore/bluestore-bitmap ubuntu_18.04} | 4
pass | 6457937 | 2021-10-22 15:47:41 | | 2021-10-22 19:04:19 | | 1215 | | smithi | master | rhel | 8.4 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 3
pass | 6457890 | 2021-10-22 15:46:57 | 2021-10-22 18:13:08 | 2021-10-22 18:36:44 | 0:23:36 | 0:12:38 | 0:10:58 | smithi | master | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8.stream} tasks/scrub_test} | 2
pass | 6457834 | 2021-10-22 15:46:04 | 2021-10-22 17:47:43 | 2021-10-22 18:13:10 | 0:25:27 | 0:15:05 | 0:10:22 | smithi | master | centos | 8.2 | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} | 2
pass | 6457768 | 2021-10-22 15:45:03 | 2021-10-22 17:15:53 | 2021-10-22 17:48:01 | 0:32:08 | 0:21:55 | 0:10:13 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1
pass | 6457717 | 2021-10-22 15:44:15 | 2021-10-22 16:51:22 | 2021-10-22 17:12:28 | 0:21:06 | 0:11:37 | 0:09:29 | smithi | master | centos | 8.stream | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{centos_8.stream}} | 1
pass | 6457665 | 2021-10-22 15:43:27 | 2021-10-22 16:27:30 | 2021-10-22 16:51:17 | 0:23:47 | 0:13:36 | 0:10:11 | smithi | master | centos | 8.2 | rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2
pass | 6457555 | 2021-10-22 15:41:42 | 2021-10-22 15:42:12 | 2021-10-22 16:27:42 | 0:45:30 | 0:35:23 | 0:10:07 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{centos_8}} | 2
fail | 6457425 | 2021-10-22 10:10:43 | 2021-10-22 14:20:17 | 2021-10-22 14:52:07 | 0:31:50 | 0:17:06 | 0:14:44 | smithi | master | ubuntu | 20.04 | rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-stupid pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/python_api_tests_with_defaults} | 3
Failure Reason:

"2021-10-22T14:40:12.892948+0000 mon.a (mon.0) 113 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass | 6457374 | 2021-10-22 10:09:50 | 2021-10-22 13:45:34 | 2021-10-22 14:21:50 | 0:36:16 | 0:21:36 | 0:14:40 | smithi | master | centos | 8.3 | rbd/qemu/{cache/writethrough clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/filestore-xfs pool/ec-cache-pool supported-random-distro$/{centos_8} workloads/qemu_bonnie} | 3
pass | 6456157 | 2021-10-21 17:30:16 | 2021-10-21 17:45:23 | 2021-10-21 20:46:47 | 3:01:24 | 2:52:02 | 0:09:22 | smithi | master | centos | 8.3 | rgw/tools/{centos_latest cluster tasks} | 1
dead | 6455952 | 2021-10-21 14:01:00 | 2021-10-22 01:37:04 | 2021-10-22 13:50:32 | 12:13:28 | | | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3
Failure Reason:

hit max job timeout

pass | 6455897 | 2021-10-21 14:00:13 | 2021-10-22 01:05:28 | 2021-10-22 01:36:56 | 0:31:28 | 0:23:05 | 0:08:23 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4
pass | 6455848 | 2021-10-21 13:59:49 | 2021-10-22 00:43:17 | 2021-10-22 01:06:21 | 0:23:04 | 0:14:26 | 0:08:38 | smithi | master | centos | 8.3 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1