Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
smithi065.front.sepia.ceph.com | smithi | True | True | 2020-06-06 13:21:42.598869 | scheduled_teuthology@teuthology | rhel | 7.8 | x86_64 | /home/teuthworker/archive/teuthology-2020-06-06_05:07:02-powercycle-nautilus-distro-basic-smithi/5121392
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass | 5122386 | | 2020-06-06 08:00:45 | 2020-06-06 10:40:19 | 2020-06-06 11:06:18 | 0:25:59 | 0:12:13 | 0:13:46 | smithi | py2 | ubuntu | 18.04 | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/rbd_fsx} | 3
pass | 5122369 | | 2020-06-06 08:00:30 | 2020-06-06 10:28:39 | 2020-06-06 11:34:39 | 1:06:00 | 0:17:11 | 0:48:49 | smithi | py2 | centos | 8.1 | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/cfuse_workunit_suites_iozone} | 3
pass | 5122302 | | 2020-06-06 07:04:36 | 2020-06-06 09:54:13 | 2020-06-06 10:42:13 | 0:48:00 | 0:41:36 | 0:06:24 | smithi | master | centos | 8.1 | rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-comp-lz4 validator/memcheck workloads/c_api_tests} | 1
pass | 5122271 | | 2020-06-06 07:04:09 | 2020-06-06 09:40:00 | 2020-06-06 09:56:00 | 0:16:00 | 0:09:15 | 0:06:45 | smithi | master | centos | 8.1 | rbd/basic/{base/install cachepool/small clusters/{fixed-1 openstack} msgr-failures/few objectstore/bluestore-avl supported-random-distro$/{centos_latest} tasks/rbd_lock_and_fence} | 1
pass | 5122149 | | 2020-06-06 06:56:12 | 2020-06-06 09:05:55 | 2020-06-06 09:41:55 | 0:36:00 | 0:17:28 | 0:18:32 | smithi | master | ubuntu | 18.04 | fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress traceless/50pc} | 2
pass | 5122133 | | 2020-06-06 06:55:57 | 2020-06-06 08:57:48 | 2020-06-06 09:13:48 | 0:16:00 | 0:10:31 | 0:05:29 | smithi | master | centos | 8.1 | fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_latest} tasks/alternate-pool} | 2
fail | 5122094 | | 2020-06-06 06:55:21 | 2020-06-06 08:41:33 | 2020-06-06 08:59:32 | 0:17:59 | 0:10:07 | 0:07:52 | smithi | master | centos | 8.1 | fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_latest} tasks/journal-repair} | 2
Failure Reason:

Test failure: test_inject_to_empty (tasks.cephfs.test_journal_repair.TestJournalRepair)

pass | 5122014 | | 2020-06-06 06:51:56 | 2020-06-06 08:04:06 | 2020-06-06 08:44:06 | 0:40:00 | 0:19:51 | 0:20:09 | smithi | py2 | | | fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-mimic 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} | 3
running | 5121392 | | 2020-06-06 05:08:54 | 2020-06-06 12:48:56 | 2020-06-06 13:42:56 | 0:55:23 | | | smithi | master | rhel | 7.8 | powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/rhel_7 tasks/rados_api_tests thrashosds-health whitelist_health} | 4
pass | 5121347 | | 2020-06-06 05:08:34 | 2020-06-06 12:02:39 | 2020-06-06 13:20:40 | 1:18:01 | 0:35:18 | 0:42:43 | smithi | py2 | rhel | 7.5 | powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-bitmap powercycle/default supported-all-distro/rhel_7.5 tasks/cfuse_workunit_suites_ffsb thrashosds-health whitelist_health} | 4
pass | 5121336 | | 2020-06-06 05:08:30 | 2020-06-06 11:50:32 | 2020-06-06 12:40:32 | 0:50:00 | 0:31:47 | 0:18:13 | smithi | master | rhel | 7.8 | powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-stupid powercycle/default supported-all-distro/rhel_7 tasks/rados_api_tests thrashosds-health whitelist_health} | 4
pass | 5121312 | | 2020-06-06 05:08:19 | 2020-06-06 11:34:34 | 2020-06-06 12:02:34 | 0:28:00 | 0:19:19 | 0:08:41 | smithi | master | rhel | 7.8 | powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/rhel_7 tasks/readwrite thrashosds-health whitelist_health} | 4
fail | 5120826 | | 2020-06-06 00:29:06 | 2020-06-06 01:25:03 | 2020-06-06 08:13:13 | 6:48:10 | 6:41:56 | 0:06:14 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi065 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=754623def48ccfd99f69f982a8f1c8c76e9b3fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass | 5120736 | | 2020-06-06 00:27:56 | 2020-06-06 00:54:49 | 2020-06-06 01:24:49 | 0:30:00 | 0:20:20 | 0:09:40 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-hybrid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} | 2
pass | 5120675 | | 2020-06-06 00:27:08 | 2020-06-06 00:27:09 | 2020-06-06 00:55:09 | 0:28:00 | 0:12:04 | 0:15:56 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4
fail | 5120430 | | 2020-06-05 16:40:10 | 2020-06-05 16:40:28 | 2020-06-05 21:34:35 | 4:54:07 | 3:48:19 | 1:05:48 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4
Failure Reason:

"2020-06-05T20:27:42.076689+0000 osd.11 (osd.11) 68 : cluster [ERR] 5.8 deep-scrub : stat mismatch, got 4/4 objects, 0/0 clones, 0/0 dirty, 0/0 omap, 0/0 pinned, 4/4 hit_set_archive, 0/0 whiteouts, 6718/0 bytes, 0/0 manifest objects, 6718/6718 hit_set_archive bytes." in cluster log

pass | 5119797 | | 2020-06-05 15:38:05 | 2020-06-05 17:15:48 | 2020-06-05 17:35:47 | 0:19:59 | 0:10:59 | 0:09:00 | smithi | py2 | ubuntu | 18.04 | rados/perf/{ceph objectstore/bluestore-comp openstack settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} | 1
pass | 5119637 | | 2020-06-05 14:24:58 | 2020-06-05 14:29:48 | 2020-06-05 17:15:51 | 2:46:03 | 2:25:38 | 0:20:25 | smithi | py2 | ubuntu | 18.04 | upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} objectstore/filestore-xfs ubuntu_latest} | 4
fail | 5119581 | | 2020-06-05 13:13:22 | 2020-06-05 13:35:42 | 2020-06-05 13:57:42 | 0:22:00 | 0:13:08 | 0:08:52 | smithi | master | rhel | 8.1 | rados:cephadm/with-work/{distro/rhel_latest fixed-2 mode/root msgr/async start tasks/rados_python} | 2
Failure Reason:

Command failed on smithi065 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:ef66e1bc4d611e10aee43b698f822996673b3fe4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd4aafae-a733-11ea-a06b-001a4aab830c -- ceph orch daemon add osd smithi065:vg_nvme/lv_4'

fail | 5119549 | | 2020-06-05 13:10:37 | 2020-06-05 13:15:34 | 2020-06-05 13:39:34 | 0:24:00 | 0:08:27 | 0:15:33 | smithi | master | | | krbd/rbd/{bluestore-bitmap clusters/fixed-3 conf msgr-failures/few tasks/rbd_fio} | 3
Failure Reason:

Command failed on smithi137 with status 6: 'sudo rbd device map -o queue_depth=128 i2flayering-exclusive-locksmithi137'