Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi012.front.sepia.ceph.com smithi True True 2020-08-07 20:20:08.944014 scheduled_yuriw@teuthology ubuntu 18.04 x86_64 /home/teuthworker/archive/yuriw-2020-08-07_15:05:02-multimds-wip-yuri4-testing-2020-08-07-1350-nautilus-distro-basic-smithi/5308808
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 5315720 2020-08-07 17:19:39 2020-08-07 18:06:16 2020-08-07 18:20:14 0:13:58 smithi master rhel 8.1 rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-agent-big} 2
fail 5315650 2020-08-07 17:18:36 2020-08-07 17:29:56 2020-08-07 18:07:56 0:38:00 0:08:15 0:29:45 smithi master centos 8.1 rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/snaps-few-objects-localized} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --localize-reads --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --op write_excl 50 --pool unique_pool_0'
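The repeated `--op <name> <weight>` pairs in the crashed `ceph_test_rados` invocation describe the relative weighting of each operation in the randomized workload. A minimal sketch of pulling those weights out of the command string (the string below is the tail of the command above; the parsing helper is illustrative, not part of teuthology):

```python
import re

# Tail of the crashed ceph_test_rados invocation reported above
cmd = ("ceph_test_rados --localize-reads --max-ops 4000 --objects 50 "
       "--op read 100 --op write 50 --op delete 50 --op snap_create 50 "
       "--op snap_remove 50 --op rollback 50 --op copy_from 50 "
       "--op write_excl 50 --pool unique_pool_0")

# Each "--op <name> <weight>" pair contributes <weight> to the op mix
weights = {name: int(w) for name, w in re.findall(r"--op (\w+) (\d+)", cmd)}

print(weights["read"])        # reads carry twice the weight of the other ops
print(sum(weights.values()))  # total weight across all eight ops
```

This makes it easy to compare the op mix between two failed jobs at a glance, e.g. the snapshot ops present here but absent from the `redirect_promote_tests` failure further down.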

pass 5309023 2020-08-07 15:08:26 2020-08-07 15:15:31 2020-08-07 17:53:35 2:38:04 2:24:46 0:13:18 smithi master centos 7.8 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_latest} tasks/volumes} 2
pass 5308851 2020-08-07 15:06:37 2020-08-07 19:52:43 2020-08-07 20:20:41 0:27:58 0:14:45 0:13:13 smithi master rhel 7.8 multimds/thrash/{0-supported-random-distro$/{ubuntu_16.04} begin ceph-thrash/default clusters/9-mds-3-standby conf/{client mds mon osd} mount/kclient/{mount overrides/{distro/rhel/{k-distro rhel_latest} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{fuse-default-perm-no thrash/{frag_enable whitelist_health whitelist_wrongly_marked_down} thrash_debug} tasks/cfuse_workunit_suites_pjd} 3
pass 5308811 2020-08-07 15:06:13 2020-08-07 19:34:58 2020-08-07 20:00:58 0:26:00 0:15:35 0:10:25 smithi master rhel 7.8 multimds/basic/{0-supported-random-distro$/{centos_latest} begin clusters/3-mds conf/{client mds mon osd} inline/no mount/kclient/{mount overrides/{distro/rhel/{k-distro rhel_latest} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{basic/{frag_enable whitelist_health whitelist_wrongly_marked_down} fuse-default-perm-no} q_check_counter/check_counter tasks/cephfs_test_exports} 3
running 5308808 2020-08-07 15:06:11 2020-08-07 19:34:43 2020-08-07 20:40:43 1:06:20 smithi master ubuntu 18.04 multimds/basic/{0-supported-random-distro$/{ubuntu_latest} begin clusters/3-mds conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root overrides/{basic/{frag_enable whitelist_health whitelist_wrongly_marked_down} fuse-default-perm-no} q_check_counter/check_counter tasks/cfuse_workunit_suites_fsx} 3
pass 5308727 2020-08-07 15:05:21 2020-08-07 19:01:02 2020-08-07 19:39:02 0:38:00 0:18:14 0:19:46 smithi master rhel 7.8 multimds/basic/{0-supported-random-distro$/{rhel_latest} begin clusters/3-mds conf/{client mds mon osd} inline/no mount/kclient/{mount overrides/{distro/rhel/{k-distro rhel_latest} ms-die-on-skipped}} objectstore-ec/filestore-xfs overrides/{basic/{frag_enable whitelist_health whitelist_wrongly_marked_down} fuse-default-perm-no} q_check_counter/check_counter tasks/cfuse_workunit_suites_fsstress} 3
pass 5308678 2020-08-07 15:04:47 2020-08-07 18:44:15 2020-08-07 19:14:15 0:30:00 0:19:37 0:10:23 smithi master rhel 7.8 kcephfs/cephfs/{begin clusters/1-mds-1-client conf/{client mds mon osd} inline/yes kclient/{mount overrides/{distro/rhel/{k-distro rhel_latest} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{frag_enable log-config osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kclient_workunit_suites_fsx} 3
pass 5306766 2020-08-07 14:26:10 2020-08-07 18:18:40 2020-08-07 18:48:40 0:30:00 0:22:55 0:07:05 smithi master centos 8.1 rados:cephadm/with-work/{distro/centos_latest fixed-2 mode/root msgr/async start tasks/rados_python} 2
dead 5306546 2020-08-07 14:21:45 2020-08-07 14:23:09 2020-08-07 15:15:07 0:51:58 smithi master ubuntu 18.04 rados:thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/radosbench-high-concurrency} 2
fail 5306449 2020-08-07 14:18:59 2020-08-07 14:20:35 2020-08-07 14:38:28 0:17:53 0:08:59 0:08:54 smithi master ubuntu 18.04 rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/fastclose msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/redirect_promote_tests} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --set_redirect --low_tier_pool low_tier --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 50 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'

fail 5300898 2020-08-07 12:30:30 2020-08-07 12:31:01 2020-08-07 12:49:00 0:17:59 0:10:31 0:07:28 smithi master centos 8.1 rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-snaps-balanced} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --balance-reads --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --op cache_flush 50 --op cache_try_flush 50 --op cache_evict 50 --op write_excl 50 --pool base'

fail 5295753 2020-08-07 08:01:04 2020-08-07 08:07:20 2020-08-07 08:33:20 0:26:00 0:15:17 0:10:43 smithi master ubuntu 18.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/rgw_ec_s3tests} 3
Failure Reason:

Command failed (s3 tests against rgw) on smithi012 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt /home/ubuntu/cephtest/s3-tests/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle_expiration,!fails_strict_rfc2616,!fails_with_subdomain'"

pass 5295370 2020-08-07 07:00:46 2020-08-07 07:02:51 2020-08-07 08:08:52 1:06:01 0:53:21 0:12:40 smithi master smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap tasks/kclient_workunit_suites_dbench} 3
pass 5295357 2020-08-07 05:17:10 2020-08-07 13:33:02 2020-08-07 14:13:02 0:40:00 0:17:04 0:22:56 smithi master krbd/rbd/{bluestore-bitmap clusters/fixed-3 conf msgr-failures/many tasks/rbd_workunit_trivial_sync} 3
pass 5294890 2020-08-07 03:36:11 2020-08-07 12:01:52 2020-08-07 12:29:52 0:28:00 0:20:37 0:07:23 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
pass 5294837 2020-08-07 03:35:22 2020-08-07 12:58:57 2020-08-07 13:20:56 0:21:59 0:14:45 0:07:14 smithi master rhel 7.7 rados/cephadm/smoke/{distro/rhel_7 fixed-2 start} 2
pass 5294451 2020-08-07 02:19:45 2020-08-07 04:06:14 2020-08-07 04:48:14 0:42:00 0:32:25 0:09:35 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/client-recovery} 2
pass 5294422 2020-08-07 02:19:16 2020-08-07 03:52:14 2020-08-07 04:10:14 0:18:00 0:10:11 0:07:49 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_trivial_sync} 2
pass 5294336 2020-08-07 02:17:50 2020-08-07 02:21:54 2020-08-07 03:53:55 1:32:01 1:13:25 0:18:36 smithi master rhel 8.1 fs/snaps/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/snaptests} 2
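The rows above can be tallied by their leading status token and their Runtime column (`H:MM:SS`). A small sketch, with the status/runtime pairs transcribed from the table (the helper function is illustrative, not a teuthology API):

```python
from collections import Counter

def hms_to_seconds(t: str) -> int:
    """Convert a runtime like '0:13:58' to seconds."""
    h, m, s = (int(x) for x in t.split(":"))
    return h * 3600 + m * 60 + s

# (status, runtime) pairs transcribed from the job table above
jobs = [
    ("dead", "0:13:58"), ("fail", "0:38:00"), ("pass", "2:38:04"),
    ("pass", "0:27:58"), ("pass", "0:26:00"), ("running", "1:06:20"),
    ("pass", "0:38:00"), ("pass", "0:30:00"), ("pass", "0:30:00"),
    ("dead", "0:51:58"), ("fail", "0:17:53"), ("fail", "0:17:59"),
    ("fail", "0:26:00"), ("pass", "1:06:01"), ("pass", "0:40:00"),
    ("pass", "0:28:00"), ("pass", "0:21:59"), ("pass", "0:42:00"),
    ("pass", "0:18:00"), ("pass", "1:32:01"),
]

counts = Counter(status for status, _ in jobs)
print(counts)  # Counter({'pass': 13, 'fail': 4, 'dead': 2, 'running': 1})

longest = max(jobs, key=lambda j: hms_to_seconds(j[1]))
print(longest)  # ('pass', '2:38:04')  -- the fs/basic_functional volumes job
```

The same pattern extends naturally to grouping by suite name or OS if the full row strings are parsed instead of just these two columns.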