Locked node
  Name:          smithi023.front.sepia.ceph.com
  Machine Type:  smithi
  Up:            True
  Locked:        True
  Locked Since:  2022-01-18 09:26:49.115647
  Locked By:     scheduled_teuthology@teuthology
  OS Type:       ubuntu
  OS Version:    20.04
  Arch:          x86_64
  Description:   /home/teuthworker/archive/teuthology-2022-01-18_04:17:02-fs-pacific-distro-default-smithi/6623233
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 6623830 2022-01-18 08:46:03 2022-01-18 08:46:14 2022-01-18 09:01:10 0:14:56 0:03:53 0:11:03 smithi master ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
Failure Reason:

Command failed on smithi018 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=17.0.0-10154-gd2422b0b-1focal cephadm=17.0.0-10154-gd2422b0b-1focal ceph-mds=17.0.0-10154-gd2422b0b-1focal ceph-mgr=17.0.0-10154-gd2422b0b-1focal ceph-common=17.0.0-10154-gd2422b0b-1focal ceph-fuse=17.0.0-10154-gd2422b0b-1focal ceph-test=17.0.0-10154-gd2422b0b-1focal ceph-volume=17.0.0-10154-gd2422b0b-1focal radosgw=17.0.0-10154-gd2422b0b-1focal python3-rados=17.0.0-10154-gd2422b0b-1focal python3-rgw=17.0.0-10154-gd2422b0b-1focal python3-cephfs=17.0.0-10154-gd2422b0b-1focal python3-rbd=17.0.0-10154-gd2422b0b-1focal libcephfs2=17.0.0-10154-gd2422b0b-1focal libcephfs-dev=17.0.0-10154-gd2422b0b-1focal librados2=17.0.0-10154-gd2422b0b-1focal librbd1=17.0.0-10154-gd2422b0b-1focal rbd-fuse=17.0.0-10154-gd2422b0b-1focal python3-cephfs=17.0.0-10154-gd2422b0b-1focal cephfs-shell=17.0.0-10154-gd2422b0b-1focal cephfs-top=17.0.0-10154-gd2422b0b-1focal cephfs-mirror=17.0.0-10154-gd2422b0b-1focal'

fail 6623717 2022-01-18 05:18:58 2022-01-18 06:33:44 2022-01-18 06:57:48 0:24:04 0:11:57 0:12:07 smithi master rhel 8.4 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed on smithi040 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-cephfs cephfs-top cephfs-mirror bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel'

fail 6623610 2022-01-18 05:18:03 2022-01-18 06:09:44 2022-01-18 06:34:07 0:24:23 0:12:04 0:12:19 smithi master rhel 8.4 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi023 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-cephfs cephfs-top cephfs-mirror bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel'

running 6623233 2022-01-18 04:20:43 2022-01-18 09:25:18 2022-01-18 09:44:58 0:20:03 smithi master ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/ior-shared-file} 5
pass 6623197 2022-01-18 04:20:14 2022-01-18 09:01:12 2022-01-18 09:26:42 0:25:30 0:14:37 0:10:53 smithi master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
pass 6623173 2022-01-18 04:19:54 2022-01-18 08:18:55 2022-01-18 08:46:47 0:27:52 0:20:47 0:07:05 smithi master rhel 8.4 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} 3
pass 6623151 2022-01-18 04:19:34 2022-01-18 07:51:13 2022-01-18 08:19:02 0:27:49 0:16:46 0:11:03 smithi master rhel 8.4 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{no}} 3
fail 6623112 2022-01-18 04:18:57 2022-01-18 06:57:30 2022-01-18 07:55:28 0:57:58 0:46:59 0:10:59 smithi master centos 8.2 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} 3
Failure Reason:

"2022-01-18T07:37:40.148946+0000 mds.d (mds.1) 4242 : cluster [WRN] Scrub error on inode 0x1000000dd71 (/client.0/tmp/t/linux-5.4/samples/Kconfig) see mds.d log and `damage ls` output for details" in cluster log

pass 6622776 2022-01-18 03:17:14 2022-01-18 03:49:24 2022-01-18 06:10:17 2:20:53 2:12:09 0:08:44 smithi master rhel 8.4 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} 3
pass 6622690 2022-01-18 03:16:08 2022-01-18 03:16:51 2022-01-18 03:50:17 0:33:26 0:23:52 0:09:34 smithi master rhel 8.4 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/scrub} 2
dead 6622664 2022-01-17 20:32:51 2022-01-17 20:34:43 2022-01-18 03:14:57 6:40:14 smithi master ubuntu 20.04 rados:cephadm:smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

hit max job timeout

fail 6622623 2022-01-17 20:17:27 2022-01-17 20:17:54 2022-01-18 09:39:05 13:21:11 1:14:13 12:06:58 smithi master ubuntu 20.04 rados:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi063 with status 251: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
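For reference, the failed command above is teuthology's one-liner for exposing an LVM volume as an NVMe namespace over the kernel's loopback target. The sketch below restates the same sequence step by step; it is an annotated breakdown, not a fix. The names and paths (lv_1, /dev/vg_nvme/lv_1, port 1) are copied from the failed command, and it assumes, as the original does, that the nvmet/nvme-loop modules are loaded and that port 1 already exists under configfs.

    #!/bin/bash
    # Annotated restatement of the failed nvme-loop setup; assumes the nvmet and
    # nvme-loop modules are loaded and /sys/kernel/config/nvmet/ports/1 exists.
    set -euxo pipefail
    SUBSYS=/sys/kernel/config/nvmet/subsystems/lv_1

    # 1. Create the target subsystem and allow any host NQN to connect.
    sudo mkdir -p "$SUBSYS"
    echo 1 | sudo tee "$SUBSYS/attr_allow_any_host"

    # 2. Add namespace 1, back it with the LVM volume, and enable it.
    sudo mkdir -p "$SUBSYS/namespaces/1"
    echo -n /dev/vg_nvme/lv_1 | sudo tee "$SUBSYS/namespaces/1/device_path"
    echo 1 | sudo tee "$SUBSYS/namespaces/1/enable"

    # 3. Publish the subsystem on the pre-existing loop port.
    sudo ln -s "$SUBSYS" /sys/kernel/config/nvmet/ports/1/subsystems/lv_1

    # 4. Connect from the same host over the loop transport.
    sudo nvme connect -t loop -n lv_1 -q hostnqn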

pass 6622518 2022-01-17 17:07:26 2022-01-17 17:25:54 2022-01-17 18:39:21 1:13:27 1:01:34 0:11:53 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
fail 6622461 2022-01-17 16:30:08 2022-01-17 16:41:56 2022-01-17 17:27:42 0:45:46 0:31:29 0:14:17 smithi master centos 8.3 rgw:verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/replicated sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Command failed (s3 tests against rgw) on smithi023 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt /home/ubuntu/cephtest/s3-tests/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle_expiration,!fails_strict_rfc2616,!test_of_sts,!webidentity_test,!fails_with_subdomain'"

fail 6622437 2022-01-17 14:57:16 2022-01-17 18:39:10 2022-01-17 18:59:42 0:20:32 0:09:34 0:10:58 smithi master ubuntu 20.04 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/filestore-xfs policy/none rbd-mirror/four-per-cluster supported-random-distro$/{ubuntu_latest} workloads/rbd-mirror-journal-workunit} 2
Failure Reason:

Command failed (workunit test rbd/rbd_mirror_journal.sh) on smithi058 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d238de862197118eb4dfe9e422168d942c06f08c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_ROOT=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_MNT=/home/ubuntu/cephtest/mnt.cluster1.mirror CEPH_ARGS=\'\' RBD_MIRROR_INSTANCES=4 RBD_MIRROR_USE_EXISTING_CLUSTER=1 RBD_MIRROR_USE_RBD_MIRROR=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.cluster1.client.mirror/qa/workunits/rbd/rbd_mirror_journal.sh'

fail 6622416 2022-01-17 14:56:55 2022-01-17 16:23:22 2022-01-17 16:46:01 0:22:39 0:11:47 0:10:52 smithi master centos 8.stream rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8.stream} workloads/rbd-mirror-ha-workunit} 2
Failure Reason:

Command failed (workunit test rbd/rbd_mirror_ha.sh) on smithi058 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d238de862197118eb4dfe9e422168d942c06f08c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_ROOT=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_MNT=/home/ubuntu/cephtest/mnt.cluster1.mirror CEPH_ARGS=\'\' RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.cluster1.client.mirror/qa/workunits/rbd/rbd_mirror_ha.sh'

fail 6622379 2022-01-17 14:56:16 2022-01-17 16:02:16 2022-01-17 16:24:28 0:22:12 0:11:01 0:11:11 smithi master ubuntu 20.04 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid policy/none rbd-mirror/four-per-cluster supported-random-distro$/{ubuntu_latest} workloads/rbd-mirror-journal-workunit} 2
Failure Reason:

Command failed (workunit test rbd/rbd_mirror_journal.sh) on smithi058 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d238de862197118eb4dfe9e422168d942c06f08c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_ROOT=/home/ubuntu/cephtest/clone.cluster1.client.mirror CEPH_MNT=/home/ubuntu/cephtest/mnt.cluster1.mirror CEPH_ARGS=\'\' RBD_MIRROR_INSTANCES=4 RBD_MIRROR_USE_EXISTING_CLUSTER=1 RBD_MIRROR_USE_RBD_MIRROR=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.cluster1.client.mirror/qa/workunits/rbd/rbd_mirror_journal.sh'

pass 6622303 2022-01-17 13:43:51 2022-01-17 14:53:55 2022-01-17 15:40:26 0:46:31 0:35:56 0:10:35 smithi master centos 8.3 rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 6622245 2022-01-17 13:41:05 2022-01-17 14:26:07 2022-01-17 14:54:40 0:28:33 0:19:50 0:08:43 smithi master rhel 8.4 rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
dead 6622204 2022-01-17 13:40:25 2022-01-17 14:06:31 2022-01-17 14:27:05 0:20:34 smithi master ubuntu 18.04 rados:cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

SSH connection to smithi058 was lost: 'uname -r'