Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi139.front.sepia.ceph.com smithi True True 2021-10-19 05:07:04.079273 scheduled_pdonnell@teuthology ubuntu 20.04 x86_64 /home/teuthworker/archive/pdonnell-2021-10-19_04:32:14-fs-wip-pdonnell-testing-20211019.013028-distro-basic-smithi/6450382
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 6450382 2021-10-19 04:33:24 2021-10-19 05:05:23 2021-10-19 08:38:38 3:34:41 smithi master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
pass 6449765 2021-10-19 00:32:08 2021-10-19 00:46:01 2021-10-19 01:57:32 1:11:31 1:05:44 0:05:47 smithi master ubuntu 18.04 upgrade:octopus-x/parallel/{0-distro/ubuntu_18.04 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
fail 6449750 2021-10-18 21:44:41 2021-10-18 21:53:45 2021-10-18 22:19:46 0:26:01 0:13:49 0:12:12 smithi master ubuntu 20.04 orch:cephadm:osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi023 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b53c9ab78265f6dad241b4b05aa87603f7e66e27 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid debb7d42-305f-11ec-8c28-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

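For readability, the shell sequence embedded in that failed cephadm command (executed via `bash -c` inside the `cephadm shell` on smithi023) unescapes to roughly the following. The quoted one-liner above is the authoritative form; this is only a best-effort reconstruction of the quoting, with comments added to describe each step:

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    # Look up the device ID backing osd.1, then resolve its host and device path
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    # Remove osd.1 and wait until the removal no longer appears in the status output
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    # Zap the freed device and re-add it as a new OSD, then wait for osd.1 to come up
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
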
pass 6449738 2021-10-18 20:17:04 2021-10-18 20:37:36 2021-10-18 21:13:39 0:36:03 0:26:34 0:09:29 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6449687 2021-10-18 19:11:50 2021-10-19 02:58:47 2021-10-19 03:30:18 0:31:31 0:21:04 0:10:27 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/rgw-ingress 3-final} 2
pass 6449622 2021-10-18 19:10:49 2021-10-19 02:22:42 2021-10-19 02:58:46 0:36:04 0:29:34 0:06:30 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/rbd_cls} 3
pass 6449590 2021-10-18 19:10:19 2021-10-19 01:57:33 2021-10-19 02:23:02 0:25:29 0:18:25 0:07:04 smithi master rhel 8.4 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews} 3
pass 6449477 2021-10-18 19:08:50 2021-10-19 00:21:57 2021-10-19 00:45:57 0:24:00 0:18:22 0:05:38 smithi master rhel 8.4 rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 6449427 2021-10-18 19:08:26 2021-10-18 23:49:54 2021-10-19 00:21:55 0:32:01 0:25:25 0:06:36 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
pass 6449393 2021-10-18 19:08:09 2021-10-18 23:25:49 2021-10-18 23:49:50 0:24:01 0:14:11 0:09:50 smithi master centos 8.3 rados/cephadm/smoke/{distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} 2
pass 6449304 2021-10-18 19:07:26 2021-10-18 22:19:43 2021-10-18 22:59:44 0:40:01 0:28:39 0:11:22 smithi master centos 8.2 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6449246 2021-10-18 19:06:58 2021-10-18 21:13:45 2021-10-18 21:53:48 0:40:03 0:27:54 0:12:09 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 2
fail 6449226 2021-10-18 19:06:48 2021-10-19 04:19:04 2021-10-19 05:06:56 0:47:52 0:37:49 0:10:03 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_mon_osdmap_prune} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi139 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cd530917c8d3341fa3f414f3db51aa7b9cdf2d6a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

pass 6449191 2021-10-18 19:06:31 2021-10-18 19:59:40 2021-10-18 20:37:42 0:38:02 0:27:41 0:10:21 smithi master centos 8.2 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
pass 6449164 2021-10-18 19:06:18 2021-10-19 04:03:18 2021-10-19 04:19:19 0:16:01 0:08:55 0:07:06 smithi master ubuntu 18.04 rados/multimon/{clusters/3 msgr-failures/few msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_18.04} tasks/mon_recovery} 2
pass 6449078 2021-10-18 19:05:11 2021-10-19 03:28:03 2021-10-19 04:03:57 0:35:54 0:23:13 0:12:41 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged msgr/async start tasks/rados_python} 2
fail 6449022 2021-10-18 15:58:37 2021-10-18 16:32:50 2021-10-18 17:14:52 0:42:02 0:30:28 0:11:34 smithi master centos 8.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8.stream} tasks/{0-install test/rbd_api_tests}} 3
Failure Reason:

"2021-10-18T16:55:52.278673+0000 mon.b (mon.0) 204 : cluster [WRN] pool 'test-librbd-smithi139-30439-9' is full (reached quota's max_bytes: 10 MiB)" in cluster log

pass 6448987 2021-10-18 15:50:26 2021-10-18 17:58:45 2021-10-18 18:22:45 0:24:00 0:18:35 0:05:25 smithi master rhel 8.4 orch:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
pass 6448961 2021-10-18 15:50:01 2021-10-18 17:14:42 2021-10-18 17:58:44 0:44:02 0:35:06 0:08:56 smithi master centos 8.2 orch:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
pass 6448926 2021-10-18 15:49:34 2021-10-18 15:51:27 2021-10-18 16:33:29 0:42:02 0:31:35 0:10:27 smithi master centos 8.2 orch:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2