Name:          smithi140.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2021-10-24 05:08:16.183242
Locked By:     scheduled_teuthology@teuthology
OS Type:       rhel
OS Version:    8.4
Arch:          x86_64
Description:   /home/teuthworker/archive/teuthology-2021-10-24_03:31:02-rados-pacific-distro-default-smithi/6458550
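The same lock record can be pulled from a teuthology admin host; a minimal sketch, assuming the stock teuthology-lock CLI and that these flag names (quoted from memory, not verified against this teuthology checkout) are still current:

    # show the lock state of the node from the record above
    teuthology-lock --list smithi140.front.sepia.ceph.com
    # or list everything currently held by the nightly scheduler
    teuthology-lock --list --owner scheduled_teuthology@teuthology
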
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
running 6458550 2021-10-24 03:35:34 2021-10-24 05:07:15 2021-10-24 05:19:42 0:12:44 smithi master rhel 8.4 rados/dashboard/{centos_8.2_container_tools_3.0 clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{rhel_8} tasks/dashboard} 2
pass 6458456 2021-10-24 03:34:25 2021-10-24 04:20:37 2021-10-24 05:08:06 0:47:29 0:38:17 0:09:12 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
pass 6458353 2021-10-24 03:33:06 2021-10-24 03:33:06 2021-10-24 04:20:35 0:47:29 0:33:50 0:13:39 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/valgrind} 2
fail 6458294 2021-10-23 22:11:06 2021-10-23 22:40:31 2021-10-23 22:59:59 0:19:28 0:09:31 0:09:57 smithi master centos 8.3 rados:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} 2
Failure Reason:

Command failed on smithi140 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c5cb50e88f3244d611efa2678536dc8e0844d223 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6ade2a20-3454-11ec-8c28-001a4aab830c -- ceph mon dump -f json'
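If this job's cluster happens to still be up, the failed probe can be re-run by hand on the node; a minimal sketch that simply re-issues the exact command from the failure line (the image tag and fsid are the ones teuthology reported for this job, and /home/ubuntu/cephtest normally only exists while the job is still holding the node):

    # on smithi140, repeat the health probe that returned status 1
    sudo /home/ubuntu/cephtest/cephadm \
        --image quay.ceph.io/ceph-ci/ceph:c5cb50e88f3244d611efa2678536dc8e0844d223 \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 6ade2a20-3454-11ec-8c28-001a4aab830c -- ceph mon dump -f json
    # status 1 means 'ceph mon dump' itself failed inside the container; the
    # status-127 variant in job 6458266 below usually means the shell never
    # found the command at all (e.g. cephadm or the container runtime missing)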

fail 6458266 2021-10-23 22:10:39 2021-10-23 22:40:33 834 smithi master centos 8.3 rados:cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi140 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c5cb50e88f3244d611efa2678536dc8e0844d223 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3c5bdb78-3451-11ec-8c28-001a4aab830c -- ceph mon dump -f json'

pass 6458171 2021-10-23 14:23:06 2021-10-23 14:23:06 2021-10-23 16:57:23 2:34:17 2:16:43 0:17:34 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-all 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} mon_election/classic objectstore/bluestore-bitmap ubuntu_18.04} 4
pass 6457929 2021-10-22 15:47:34 2021-10-22 19:14:40 1810 smithi master centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6457877 2021-10-22 15:46:45 2021-10-22 18:06:52 2021-10-22 18:33:11 0:26:19 0:13:52 0:12:27 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
pass 6457822 2021-10-22 15:45:53 2021-10-22 17:41:27 2021-10-22 18:07:06 0:25:39 0:13:09 0:12:30 smithi master centos 8.3 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6457769 2021-10-22 15:45:04 2021-10-22 17:15:54 2021-10-22 17:41:36 0:25:42 0:15:37 0:10:05 smithi master centos 8.3 rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} 1
pass 6457728 2021-10-22 15:44:26 2021-10-22 16:54:26 2021-10-22 17:15:54 0:21:28 0:11:47 0:09:41 smithi master centos 8.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8.stream}} 1
fail 6457680 2021-10-22 15:43:41 2021-10-22 16:34:56 2021-10-22 16:54:33 0:19:37 0:09:28 0:10:09 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi160 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c83793b6e74517d7a189371dfa1407db77a2dba7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3962e8e8-3358-11ec-8c28-001a4aab830c -- ceph mon dump -f json'

pass 6457646 2021-10-22 15:43:09 2021-10-22 16:17:12 2021-10-22 16:35:05 0:17:53 0:07:16 0:10:37 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 6457565 2021-10-22 15:41:51 2021-10-22 15:42:16 2021-10-22 16:17:02 0:34:46 0:18:47 0:15:59 smithi master centos 8.stream rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} tasks/mon_recovery} 3
pass 6457428 2021-10-22 10:10:46 2021-10-22 14:23:18 2021-10-22 14:47:15 0:23:57 0:11:45 0:12:12 smithi master centos 8.3 rbd/qemu/{cache/writethrough clusters/{fixed-3 openstack} features/readbalance msgr-failures/few objectstore/bluestore-stupid pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/qemu_xfstests} 3
pass 6457361 2021-10-22 10:09:37 2021-10-22 13:40:29 2021-10-22 14:23:47 0:43:18 0:31:12 0:12:06 smithi master centos 8.stream rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-hybrid pool/ec-data-pool supported-random-distro$/{centos_8.stream} workloads/rbd_fio} 3
fail 6457302 2021-10-22 10:08:37 2021-10-22 13:00:41 2021-10-22 13:40:24 0:39:43 0:27:38 0:12:05 smithi master centos 8.stream rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-snappy pool/replicated-data-pool supported-random-distro$/{centos_8.stream} workloads/c_api_tests} 3
Failure Reason:

"2021-10-22T13:31:16.273167+0000 mon.a (mon.0) 671 : cluster [WRN] Health check failed: Degraded data redundancy: 2/1738 objects degraded (0.115%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 6457247 2021-10-22 10:07:42 2021-10-22 12:17:24 2021-10-22 13:00:50 0:43:26 0:32:37 0:10:49 smithi master centos 8.stream rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/filestore-xfs pool/ec-data-pool supported-random-distro$/{centos_8.stream} workloads/c_api_tests_with_journaling} 3
fail 6457205 2021-10-22 10:01:59 2021-10-22 11:56:20 2021-10-22 12:15:39 0:19:19 0:08:08 0:11:11 smithi master centos 8.3 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-lz4 policy/none rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-journal-stress-workunit} 2
Failure Reason:

No module named 'tasks'
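"No module named 'tasks'" is a Python import error raised before the workload ever runs: teuthology could not resolve the task named in the job yaml against the qa/tasks package of the branch being tested. A minimal local sanity check, assuming a checkout of the suite branch (the rbd_mirror_thrash module name is an inference from the rbd/mirror-thrash suite path, not something confirmed by this log):

    # from the qa directory of the branch under test
    cd ceph/qa
    python3 -c "import tasks, tasks.rbd_mirror_thrash"

The job's own teuthology.log is authoritative for which import actually failed.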

fail 6457154 2021-10-22 09:15:39 2021-10-22 11:14:40 2021-10-22 11:56:09 0:41:29 0:27:31 0:13:58 smithi master ubuntu 20.04 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zstd pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_defaults} 3
Failure Reason:

"2021-10-22T11:45:12.689486+0000 mon.a (mon.0) 862 : cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log