Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi008.front.sepia.ceph.com smithi True True 2021-07-25 02:46:07.094539 scheduled_kchai@teuthology centos 8 x86_64 /home/teuthworker/archive/kchai-2021-07-25_02:43:47-rados-wip-kefu-testing-2021-07-24-2153-distro-basic-smithi/6290689
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 6290689 2021-07-25 02:46:06 2021-07-25 02:46:07 2021-07-25 03:10:29 0:24:22 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
pass 6290583 2021-07-24 16:36:55 2021-07-24 21:13:46 2021-07-24 21:35:53 0:22:07 0:11:41 0:10:26 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} 2
pass 6290499 2021-07-24 16:35:15 2021-07-24 20:33:28 2021-07-24 21:14:44 0:41:16 0:31:29 0:09:47 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
pass 6290225 2021-07-24 16:14:34 2021-07-24 16:44:16 2021-07-24 20:33:18 3:49:02 3:34:37 0:14:25 smithi master ubuntu 20.04 upgrade/octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-nautilus-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-bitmap 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/classic thrashosds-health ubuntu_20.04} 5
fail 6290202 2021-07-24 16:14:00 2021-07-24 16:14:26 2021-07-24 16:45:13 0:30:47 0:15:59 0:14:48 smithi master ubuntu 20.04 upgrade/pacific-x/stress-split/{0-distro/ubuntu_20.04 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} 2
Failure Reason:

Command failed on smithi008 with status 13: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
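The exit status 13 above is, assuming the `ceph` CLI follows its usual convention of exiting with the errno of the failing operation, the POSIX `EACCES` ("permission denied") code — which typically points at a cephx authentication or capability problem rather than a crashed daemon. A minimal sketch of that errno mapping:

```python
import errno
import os

# Under the (assumed) convention that the ceph CLI exits with the errno
# of the failing call, exit status 13 corresponds to errno 13:
print(errno.errorcode[13])          # → EACCES
print(os.strerror(errno.EACCES))    # → Permission denied
```

This only names the error class; confirming the actual cause would require the teuthology log for job 6290689.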

fail 6290144 2021-07-24 14:23:41 2021-07-24 14:53:33 2021-07-24 15:13:35 0:20:02 0:08:31 0:11:31 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} mon_election/classic objectstore/bluestore-bitmap ubuntu_18.04} 4
Failure Reason:

Command failed on smithi008 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=16.2.5-94-g8e0e9efa-1bionic cephadm=16.2.5-94-g8e0e9efa-1bionic ceph-mds=16.2.5-94-g8e0e9efa-1bionic ceph-mgr=16.2.5-94-g8e0e9efa-1bionic ceph-common=16.2.5-94-g8e0e9efa-1bionic ceph-fuse=16.2.5-94-g8e0e9efa-1bionic ceph-test=16.2.5-94-g8e0e9efa-1bionic radosgw=16.2.5-94-g8e0e9efa-1bionic python3-rados=16.2.5-94-g8e0e9efa-1bionic python3-rgw=16.2.5-94-g8e0e9efa-1bionic python3-cephfs=16.2.5-94-g8e0e9efa-1bionic python3-rbd=16.2.5-94-g8e0e9efa-1bionic libcephfs2=16.2.5-94-g8e0e9efa-1bionic libcephfs-dev=16.2.5-94-g8e0e9efa-1bionic librados2=16.2.5-94-g8e0e9efa-1bionic librbd1=16.2.5-94-g8e0e9efa-1bionic rbd-fuse=16.2.5-94-g8e0e9efa-1bionic'

fail 6290107 2021-07-24 14:22:59 2021-07-24 14:23:37 2021-07-24 14:53:52 0:30:15 0:12:10 0:18:05 smithi master ubuntu 18.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-nautilus-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/classic thrashosds-health ubuntu_18.04} 5
Failure Reason:

Command failed on smithi008 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=16.2.5-94-g8e0e9efa-1bionic cephadm=16.2.5-94-g8e0e9efa-1bionic ceph-mds=16.2.5-94-g8e0e9efa-1bionic ceph-mgr=16.2.5-94-g8e0e9efa-1bionic ceph-common=16.2.5-94-g8e0e9efa-1bionic ceph-fuse=16.2.5-94-g8e0e9efa-1bionic ceph-test=16.2.5-94-g8e0e9efa-1bionic radosgw=16.2.5-94-g8e0e9efa-1bionic python3-rados=16.2.5-94-g8e0e9efa-1bionic python3-rgw=16.2.5-94-g8e0e9efa-1bionic python3-cephfs=16.2.5-94-g8e0e9efa-1bionic python3-rbd=16.2.5-94-g8e0e9efa-1bionic libcephfs2=16.2.5-94-g8e0e9efa-1bionic libcephfs-dev=16.2.5-94-g8e0e9efa-1bionic librados2=16.2.5-94-g8e0e9efa-1bionic librbd1=16.2.5-94-g8e0e9efa-1bionic rbd-fuse=16.2.5-94-g8e0e9efa-1bionic'

pass 6290053 2021-07-24 11:01:09 2021-07-24 11:01:44 2021-07-24 11:24:35 0:22:51 0:10:16 0:12:35 smithi master ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 6289989 2021-07-24 05:57:27 2021-07-24 08:29:12 2021-07-24 10:38:01 2:08:49 1:45:12 0:23:37 smithi master centos 8.3 upgrade/octopus-x/parallel/{0-distro/centos_8.3_kubic_stable 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
fail 6289964 2021-07-24 05:57:02 2021-07-24 08:06:51 2021-07-24 08:29:10 0:22:19 0:04:39 0:17:40 smithi master ubuntu 20.04 upgrade/octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-nautilus-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/classic thrashosds-health ubuntu_20.04} 5
Failure Reason:

Command failed on smithi008 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=15.2.13-209-g02dd0874-1focal ceph-mds=15.2.13-209-g02dd0874-1focal ceph-mgr=15.2.13-209-g02dd0874-1focal ceph-common=15.2.13-209-g02dd0874-1focal ceph-fuse=15.2.13-209-g02dd0874-1focal ceph-test=15.2.13-209-g02dd0874-1focal radosgw=15.2.13-209-g02dd0874-1focal python3-rados=15.2.13-209-g02dd0874-1focal python3-rgw=15.2.13-209-g02dd0874-1focal python3-cephfs=15.2.13-209-g02dd0874-1focal python3-rbd=15.2.13-209-g02dd0874-1focal libcephfs2=15.2.13-209-g02dd0874-1focal librados2=15.2.13-209-g02dd0874-1focal librbd1=15.2.13-209-g02dd0874-1focal rbd-fuse=15.2.13-209-g02dd0874-1focal'

pass 6289936 2021-07-24 05:29:03 2021-07-24 07:51:45 2021-07-24 08:08:18 0:16:33 0:08:33 0:08:00 smithi master centos 8.stream rados/objectstore/{backends/fusestore supported-random-distro$/{centos_8.stream}} 1
pass 6289893 2021-07-24 05:28:26 2021-07-24 07:29:30 2021-07-24 07:49:41 0:20:11 0:06:50 0:13:21 smithi master ubuntu 20.04 rados/multimon/{clusters/9 mon_election/classic msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 3
pass 6289829 2021-07-24 05:27:30 2021-07-24 07:01:18 2021-07-24 07:30:18 0:29:00 0:19:10 0:09:50 smithi master centos 8.3 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/pool-create-delete} 2
pass 6289788 2021-07-24 05:26:54 2021-07-24 06:37:19 2021-07-24 07:00:53 0:23:34 0:17:11 0:06:23 smithi master rhel 8.4 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} 3
pass 6289748 2021-07-24 05:26:20 2021-07-24 06:18:55 2021-07-24 06:37:51 0:18:56 0:08:58 0:09:58 smithi master centos 8.3 rados/multimon/{clusters/9 mon_election/classic msgr-failures/many msgr/async no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 3
pass 6289693 2021-07-24 05:25:32 2021-07-24 05:53:44 2021-07-24 06:18:45 0:25:01 0:15:49 0:09:12 smithi master centos 8.3 rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
pass 6289654 2021-07-24 05:24:50 2021-07-24 05:25:06 2021-07-24 05:53:34 0:28:28 0:11:43 0:16:45 smithi master ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
dead 6289240 2021-07-24 03:39:10 2021-07-24 03:39:10 2021-07-24 04:00:14 0:21:04 smithi master rhel 8.3 rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.3_kubic_stable} 1-start 2-services/basic 3-final} 1
pass 6289171 2021-07-23 17:41:40 2021-07-23 19:35:54 2021-07-23 20:52:10 1:16:16 1:05:31 0:10:45 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-low-osd-mem-target supported-random-distro$/{ubuntu_18.04} tasks/dashboard} 2
pass 6289111 2021-07-23 17:40:38 2021-07-23 19:08:20 2021-07-23 19:36:35 0:28:15 0:17:01 0:11:14 smithi master centos 8.2 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} 2