Name:          smithi172.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2020-07-07 00:58:34.239525
Locked By:     scheduled_yuriw@teuthology
OS Type:       rhel
OS Version:    7.8
Arch:          x86_64
Description:   /home/teuthworker/archive/yuriw-2020-07-06_17:29:30-kcephfs-wip-yuri3-testing-2020-07-01-1707-nautilus-distro-basic-smithi/5203946
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 5204105 2020-07-06 17:32:46 2020-07-07 00:12:41 2020-07-07 00:38:41 0:26:00 0:17:08 0:08:52 smithi master fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/no whitelist_health whitelist_wrongly_marked_down} tasks/{0-luminous 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 5204046 2020-07-06 17:31:47 2020-07-06 23:20:55 2020-07-07 00:14:56 0:54:01 0:17:59 0:36:02 smithi master centos 7.8 multimds/basic/{0-supported-random-distro$/{centos_latest} begin clusters/3-mds conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap overrides/{basic/{frag_enable whitelist_health whitelist_wrongly_marked_down} fuse-default-perm-no} q_check_counter/check_counter tasks/cfuse_workunit_suites_pjd} 3
running 5203946 2020-07-06 17:30:17 2020-07-06 21:40:20 2020-07-07 02:46:27 5:08:01 smithi master rhel 7.8 kcephfs/recovery/{begin clusters/1-mds-4-client conf/{client mds mon osd} kclient/{mount overrides/{distro/random/{k-testing supported$/{rhel_latest}} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{frag_enable log-config osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/client-recovery} 6
pass 5203930 2020-07-06 17:30:03 2020-07-06 21:34:23 2020-07-07 01:00:28 3:26:05 0:15:08 3:10:57 smithi master rhel 7.8 kcephfs/recovery/{begin clusters/1-mds-4-client conf/{client mds mon osd} kclient/{mount overrides/{distro/rhel/{k-distro rhel_latest} ms-die-on-skipped}} objectstore-ec/filestore-xfs overrides/{frag_enable log-config osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/pool-perm} 6
pass 5203881 2020-07-06 17:28:26 2020-07-06 21:10:29 2020-07-06 21:34:29 0:24:00 0:16:50 0:07:10 smithi master rhel 8.1 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_latest} tasks/rados_cls_all} 2
pass 5203631 2020-07-06 17:24:56 2020-07-06 19:12:11 2020-07-06 21:12:14 2:00:03 1:43:15 0:16:48 smithi master rhel 8.1 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_latest} thrashers/pggrow thrashosds-health workloads/ec-radosbench} 2
pass 5203234 2020-07-06 14:23:40 2020-07-06 14:31:21 2020-07-06 23:47:35 9:16:14 2:06:14 7:10:00 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split-erasure-code/{0-cluster/{openstack start} 1-nautilus-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-octopus 7-final-workload thrashosds-health ubuntu_latest} 5
pass 5203226 2020-07-06 14:23:32 2020-07-06 14:29:15 2020-07-06 19:23:21 4:54:06 3:15:05 1:39:01 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split-erasure-code/{0-cluster/{openstack start} 1-nautilus-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-octopus 7-final-workload thrashosds-health ubuntu_latest} 5
pass 5200839 2020-07-06 04:03:48 2020-07-06 04:28:54 2020-07-06 05:08:54 0:40:00 0:28:12 0:11:48 smithi master rhel 8.1 rados/monthrash/{ceph clusters/9-mons msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_api_tests} 2
pass 5200466 2020-07-06 03:30:46 2020-07-06 09:40:05 2020-07-06 16:02:19 6:22:14 5:39:34 0:42:40 smithi master ubuntu 18.04 rados/upgrade/mimic-x-singleton/{0-cluster/{openstack start} 1-install/mimic 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} 4
pass 5200309 2020-07-06 03:09:20 2020-07-06 03:56:34 2020-07-06 04:14:34 0:18:00 0:09:30 0:08:30 smithi master centos 8.1 rgw/singleton/{all/radosgw-admin frontend/civetweb objectstore/filestore-xfs overrides rgw_pool_type/replicated supported-random-distro$/{centos_8}} 2
fail 5200279 2020-07-06 03:08:55 2020-07-06 03:36:50 2020-07-06 04:00:50 0:24:00 0:15:24 0:08:36 smithi master centos 8.0 rgw/verify/{centos_latest clusters/fixed-2 frontend/beast msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{0-install cls ragweed reshard s3tests-java s3tests} validater/valgrind} 2
Failure Reason:

Command failed on smithi172 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'
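All three failed jobs on this machine (5200279, 5200251, 5200038) report the identical error: a clone of git://git.ceph.com/ceph.git exiting with status 128. As a minimal sketch (assuming only a stock git CLI, nothing teuthology-specific), status 128 is git's generic fatal-error code, returned for an unreachable remote or invalid repository alike — consistent with a transient problem reaching git.ceph.com rather than with the checked-out ref itself:

```shell
# Sketch: git signals any fatal clone error (unreachable remote, bad
# path, missing repository) with exit status 128 -- the same status
# shown in the failure reasons above. /nonexistent/repo is a
# deliberately invalid source used to provoke that status locally.
dest=$(mktemp -u)                      # destination path that does not exist yet
git clone /nonexistent/repo "$dest" 2>/dev/null
status=$?
echo "git clone exit status: $status"  # 128 on a fatal git error
```

Because 128 covers every fatal git condition, distinguishing a network outage from a bad ref requires re-running the clone by hand and reading the stderr message, not just the exit code.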

pass 5200256 2020-07-06 03:08:37 2020-07-06 03:10:52 2020-07-06 04:34:54 1:24:02 0:14:26 1:09:36 smithi master rhel 8.1 rgw/singleton/{all/radosgw-admin frontend/civetweb objectstore/bluestore-bitmap overrides rgw_pool_type/replicated supported-random-distro$/{rhel_8}} 2
fail 5200251 2020-07-06 03:08:33 2020-07-06 03:08:34 2020-07-06 03:38:34 0:30:00 0:15:29 0:14:31 smithi master centos 8.0 rgw/verify/{centos_latest clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{0-install cls ragweed reshard s3tests-java s3tests} validater/valgrind} 2
Failure Reason:

Command failed on smithi172 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'

fail 5200203 2020-07-06 02:26:00 2020-07-06 02:36:37 2020-07-06 10:16:49 7:40:12 5:01:01 2:39:11 smithi master ubuntu 18.04 upgrade:luminous-x/stress-split/{0-cluster/{openstack start} 1-ceph-install/luminous 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-msgr2 6-nautilus 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/ubuntu_latest thrashosds-health} 5
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi172 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

pass 5200122 2020-07-06 02:05:19 2020-07-06 02:24:29 2020-07-06 03:18:29 0:54:00 0:44:40 0:09:20 smithi master centos 8.1 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-stupid pool/ec-data-pool supported-random-distro$/{centos_8} workloads/rbd_fio} 3
fail 5200038 2020-07-06 02:04:10 2020-07-06 02:04:11 2020-07-06 02:28:11 0:24:00 0:14:22 0:09:38 smithi master centos 8.1 rbd/basic/{base/install cachepool/none clusters/{fixed-1 openstack} msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/rbd_api_tests_old_format} 1
Failure Reason:

Command failed on smithi172 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'

pass 5199972 2020-07-05 13:25:46 2020-07-05 13:25:47 2020-07-05 14:03:47 0:38:00 0:13:17 0:24:43 smithi master ubuntu 16.04 kcephfs/cephfs/{begin clusters/1-mds-1-client conf/{client mds mon osd} inline/no kclient/{mount overrides/{distro/random/{k-testing supported$/{ubuntu_16.04}} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable log-config osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kclient_workunit_trivial_sync} 3
pass 5199965 2020-07-05 13:25:39 2020-07-05 13:25:44 2020-07-05 14:23:44 0:58:00 0:14:59 0:43:01 smithi master rhel 7.8 kcephfs/recovery/{begin clusters/1-mds-4-client conf/{client mds mon osd} kclient/{mount overrides/{distro/rhel/{k-distro rhel_latest} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root overrides/{frag_enable log-config osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/pool-perm} 6
pass 5199956 2020-07-05 13:25:31 2020-07-05 13:25:44 2020-07-05 15:05:45 1:40:01 0:31:08 1:08:53 smithi master ubuntu 16.04 kcephfs/recovery/{begin clusters/1-mds-4-client conf/{client mds mon osd} kclient/{mount overrides/{distro/random/{k-testing supported$/{ubuntu_16.04}} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{frag_enable log-config osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/journal-repair} 6