Name:          smithi180.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        False
Locked Since:
Locked By:
OS Type:       rhel
OS Version:    8.1
Arch:          x86_64
Description:   /home/teuthworker/archive/gkyratsas-2020-07-03_16:56:02-rados:cephadm:-master-distro-basic-smithi/5197565
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 5197565 2020-07-03 16:56:12 2020-07-03 16:56:21 2020-07-03 17:38:21 0:42:00 0:34:56 0:07:04 smithi master rhel 8.0 rados:cephadm:/with-work/{distro/rhel_8.0 fixed-2 mode/root msgr/async-v1only start tasks/rados_api_tests} 2
pass 5197531 2020-07-03 14:23:32 2020-07-03 14:23:39 2020-07-03 16:55:42 2:32:03 2:17:15 0:14:48 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split-erasure-code/{0-cluster/{openstack start} 1-nautilus-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-bitmap 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-octopus 7-final-workload thrashosds-health ubuntu_latest} 5
pass 5197509 2020-07-03 13:53:53 2020-07-03 13:53:54 2020-07-03 14:15:54 0:22:00 0:14:59 0:07:01 smithi master rhel 8.1 kcephfs/cephfs/{begin clusters/1-mds-1-client conf/{client mds mon osd} inline/no kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{frag_enable log-config osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/acls-kernel-client} 3
pass 5197497 2020-07-03 13:15:48 2020-07-03 13:15:49 2020-07-03 13:53:49 0:38:00 0:30:03 0:07:57 smithi master rhel 8.1 multimds/thrash/{0-supported-random-distro$/{centos_latest} begin ceph-thrash/mon clusters/3-mds-2-standby conf/{client mds mon osd} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{fuse-default-perm-no thrash/{frag_enable whitelist_health whitelist_wrongly_marked_down} thrash_debug} tasks/cfuse_workunit_suites_fsstress} 3
pass 5197317 2020-07-03 07:08:12 2020-07-03 10:56:16 2020-07-03 11:20:15 0:23:59 0:11:04 0:12:55 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 5197269 2020-07-03 07:07:30 2020-07-03 10:39:59 2020-07-03 10:59:58 0:19:59 0:12:48 0:07:11 smithi master rhel 8.1 rados/cephadm/smoke-roleless/{distro/rhel_latest start} 2
pass 5197166 2020-07-03 07:06:02 2020-07-03 10:00:01 2020-07-03 10:42:01 0:42:00 0:30:06 0:11:54 smithi master rhel 8.1 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 5197026 2020-07-03 07:04:06 2020-07-03 07:51:35 2020-07-03 10:05:38 2:14:03 2:01:50 0:12:13 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
pass 5196992 2020-07-03 07:00:27 2020-07-03 07:01:20 2020-07-03 07:53:20 0:52:00 0:15:54 0:36:06 smithi master smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap tasks/cfuse_workunit_suites_fsstress} 3
fail 5196949 2020-07-03 05:18:05 2020-07-03 06:37:34 2020-07-03 07:27:34 0:50:00 0:35:08 0:14:52 smithi master krbd/fsx/{ceph/ceph clusters/3-node conf features/object-map objectstore/bluestore-bitmap striping/default/{msgr-failures/many randomized-striping-off} tasks/fsx-1-client} 3
Failure Reason:

"2020-07-03T07:02:45.062417+0000 mon.a (mon.0) 183 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 5196878 2020-07-03 05:17:08 2020-07-03 06:05:07 2020-07-03 06:41:07 0:36:00 0:14:56 0:21:04 smithi master krbd/rbd-nomount/{bluestore-bitmap clusters/fixed-3 conf install/ceph msgr-failures/few tasks/krbd_fallocate} 3
pass 5196817 2020-07-03 05:07:55 2020-07-03 05:37:14 2020-07-03 06:13:14 0:36:00 0:27:14 0:08:46 smithi master centos 8.0 rgw/verify/{centos_latest clusters/fixed-2 frontend/beast msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/ec sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/valgrind} 2
fail 5196726 2020-07-03 05:02:19 2020-07-03 05:02:28 2020-07-03 05:40:28 0:38:00 0:17:35 0:20:25 smithi master centos 8.1 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/mon_thrash} 3
Failure Reason:

Command failed on smithi180 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'

pass 5196691 2020-07-03 03:59:40 2020-07-03 04:29:03 2020-07-03 04:55:03 0:26:00 0:15:01 0:10:59 smithi master ubuntu 18.04 perf-basic/{ceph distros/ubuntu_latest objectstore/filestore-xfs settings/optimized workloads/cosbench_64K_write} 1
pass 5196650 2020-07-03 03:18:54 2020-07-03 04:10:52 2020-07-03 04:32:52 0:22:00 0:16:27 0:05:33 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/client-limits} 2
fail 5196600 2020-07-03 03:18:07 2020-07-03 03:48:54 2020-07-03 04:10:53 0:21:59 0:13:50 0:08:09 smithi master centos 8.1 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench validater/lockdep} 2
Failure Reason:

Command failed on smithi180 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'

fail 5196557 2020-07-03 03:17:28 2020-07-03 03:32:48 2020-07-03 03:52:47 0:19:59 0:13:24 0:06:35 smithi master centos 8.1 fs/snaps/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/snaptests} 2
Failure Reason:

Command failed on smithi131 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'
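
Jobs 5196726, 5196600, and 5196557 all fail the same way: the workunit step cannot clone ceph.git over the git:// protocol, and git exits with its generic fatal-error status 128. A minimal reproduction sketch, assuming shell access and network reachability to git.ceph.com (the /tmp path is an illustrative stand-in for /home/ubuntu/cephtest/clone.client.0), to help distinguish a transient git daemon outage from a missing commit:

    # Re-run the exact clone/checkout sequence from the failure reason by hand.
    rm -rf /tmp/clone.client.0 \
      && git clone git://git.ceph.com/ceph.git /tmp/clone.client.0 \
      && cd /tmp/clone.client.0 \
      && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba

If the clone and checkout succeed locally, the repeated failures were most likely a transient outage of the git daemon during the run rather than a problem with the SHA itself.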

pass 5196477 2020-07-03 01:51:01 2020-07-03 03:00:31 2020-07-03 03:32:30 0:31:59 0:26:13 0:05:46 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps} 2
pass 5196410 2020-07-03 01:50:16 2020-07-03 02:40:39 2020-07-03 03:00:38 0:19:59 0:14:49 0:05:10 smithi master centos 8.1 rados/cephadm/upgrade/{1-start 2-start-upgrade 3-wait distro$/{centos_latest} fixed-2} 2
fail 5196321 2020-07-03 01:49:07 2020-07-03 02:10:00 2020-07-03 02:42:00 0:32:00 0:25:10 0:06:50 smithi master rhel 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-stupid supported-random-distro$/{rhel_8} tasks/dashboard} 2
Failure Reason:

Test failure: test_full_health (tasks.mgr.dashboard.test_health.HealthTest)