Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi205.front.sepia.ceph.com smithi True True 2021-07-25 07:48:54.621073 scheduled_kchai@teuthology centos 8 x86_64 /home/teuthworker/archive/kchai-2021-07-25_06:46:11-rados-wip-kefu-testing-2021-07-25-1126-distro-basic-smithi/6291733
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 6291733 2021-07-25 06:50:28 2021-07-25 07:48:44 2021-07-25 08:19:59 0:32:30 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
pass 6291690 2021-07-25 06:49:44 2021-07-25 07:29:19 2021-07-25 07:48:48 0:19:29 0:10:58 0:08:31 smithi master ubuntu 20.04 rados/singleton/{all/max-pg-per-osd.from-replica mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 1
pass 6291582 2021-07-25 06:47:45 2021-07-25 06:48:35 2021-07-25 07:29:21 0:40:46 0:30:02 0:10:44 smithi master centos 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
pass 6291321 2021-07-25 03:37:14 2021-07-25 06:15:04 2021-07-25 06:44:09 0:29:05 0:22:13 0:06:52 smithi master rhel 8.4 rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
pass 6291085 2021-07-25 03:34:09 2021-07-25 04:05:31 2021-07-25 06:14:55 2:09:24 1:56:26 0:12:58 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} 3
pass 6290996 2021-07-25 03:32:56 2021-07-25 03:32:58 2021-07-25 04:06:32 0:33:34 0:22:38 0:10:56 smithi master rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
dead 6290730 2021-07-25 02:46:45 2021-07-25 03:06:31 2021-07-25 03:22:05 0:15:34 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/admin_socket_objecter_requests} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 6290664 2021-07-25 02:45:42 2021-07-25 02:45:43 2021-07-25 03:06:58 0:21:15 0:13:23 0:07:52 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi149 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f7cd730b96380f0a18fbfd486fd4354c294b17eb -v bootstrap --fsid 1f00f3ae-ecf5-11eb-8c23-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.149 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6290600 2021-07-24 16:37:14 2021-07-24 21:25:35 2021-07-24 21:46:59 0:21:24 0:13:01 0:08:23 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
pass 6290523 2021-07-24 16:35:46 2021-07-24 20:48:20 2021-07-24 21:25:34 0:37:14 0:30:33 0:06:41 smithi master rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mostlyread} 2
pass 6290443 2021-07-24 16:34:02 2021-07-24 20:04:10 2021-07-24 20:48:12 0:44:02 0:32:39 0:11:23 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
pass 6290409 2021-07-24 16:33:14 2021-07-24 19:44:13 2021-07-24 20:04:47 0:20:34 0:08:11 0:12:23 smithi master ubuntu 20.04 rados/singleton/{all/resolve_stuck_peering mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 2
pass 6290375 2021-07-24 16:32:27 2021-07-24 19:23:47 2021-07-24 19:44:59 0:21:12 0:12:31 0:08:41 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
fail 6290340 2021-07-24 16:31:43 2021-07-24 18:55:22 2021-07-24 19:23:38 0:28:16 0:16:28 0:11:48 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zstd supported-random-distro$/{centos_8} tasks/module_selftest} 2
Failure Reason:

Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest)

pass 6290288 2021-07-24 16:30:28 2021-07-24 18:16:40 2021-07-24 18:56:25 0:39:45 0:28:11 0:11:34 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} 2
pass 6290270 2021-07-24 16:30:01 2021-07-24 17:47:22 2021-07-24 18:16:55 0:29:33 0:19:42 0:09:51 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/iscsi 3-final} 2
pass 6290248 2021-07-24 16:29:28 2021-07-24 17:11:42 2021-07-24 17:49:50 0:38:08 0:27:45 0:10:23 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6290203 2021-07-24 16:14:01 2021-07-24 16:14:27 2021-07-24 17:12:43 0:58:16 0:45:43 0:12:33 smithi master rhel 8.3 upgrade/octopus-x/parallel/{0-distro/rhel_8.3_kubic_stable 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi117 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6290144 2021-07-24 14:23:41 2021-07-24 14:53:33 2021-07-24 15:13:35 0:20:02 0:08:31 0:11:31 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} mon_election/classic objectstore/bluestore-bitmap ubuntu_18.04} 4
Failure Reason:

Command failed on smithi008 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=16.2.5-94-g8e0e9efa-1bionic cephadm=16.2.5-94-g8e0e9efa-1bionic ceph-mds=16.2.5-94-g8e0e9efa-1bionic ceph-mgr=16.2.5-94-g8e0e9efa-1bionic ceph-common=16.2.5-94-g8e0e9efa-1bionic ceph-fuse=16.2.5-94-g8e0e9efa-1bionic ceph-test=16.2.5-94-g8e0e9efa-1bionic radosgw=16.2.5-94-g8e0e9efa-1bionic python3-rados=16.2.5-94-g8e0e9efa-1bionic python3-rgw=16.2.5-94-g8e0e9efa-1bionic python3-cephfs=16.2.5-94-g8e0e9efa-1bionic python3-rbd=16.2.5-94-g8e0e9efa-1bionic libcephfs2=16.2.5-94-g8e0e9efa-1bionic libcephfs-dev=16.2.5-94-g8e0e9efa-1bionic librados2=16.2.5-94-g8e0e9efa-1bionic librbd1=16.2.5-94-g8e0e9efa-1bionic rbd-fuse=16.2.5-94-g8e0e9efa-1bionic'

fail 6290107 2021-07-24 14:22:59 2021-07-24 14:23:37 2021-07-24 14:53:52 0:30:15 0:12:10 0:18:05 smithi master ubuntu 18.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-nautilus-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/classic thrashosds-health ubuntu_18.04} 5
Failure Reason:

Command failed on smithi008 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=16.2.5-94-g8e0e9efa-1bionic cephadm=16.2.5-94-g8e0e9efa-1bionic ceph-mds=16.2.5-94-g8e0e9efa-1bionic ceph-mgr=16.2.5-94-g8e0e9efa-1bionic ceph-common=16.2.5-94-g8e0e9efa-1bionic ceph-fuse=16.2.5-94-g8e0e9efa-1bionic ceph-test=16.2.5-94-g8e0e9efa-1bionic radosgw=16.2.5-94-g8e0e9efa-1bionic python3-rados=16.2.5-94-g8e0e9efa-1bionic python3-rgw=16.2.5-94-g8e0e9efa-1bionic python3-cephfs=16.2.5-94-g8e0e9efa-1bionic python3-rbd=16.2.5-94-g8e0e9efa-1bionic libcephfs2=16.2.5-94-g8e0e9efa-1bionic libcephfs-dev=16.2.5-94-g8e0e9efa-1bionic librados2=16.2.5-94-g8e0e9efa-1bionic librbd1=16.2.5-94-g8e0e9efa-1bionic rbd-fuse=16.2.5-94-g8e0e9efa-1bionic'