Name:          smithi059.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2021-01-16 19:55:30.846556
Locked By:     scheduled_nojha@teuthology
OS Type:       ubuntu
OS Version:    18.04
Arch:          x86_64
Description:   /home/teuthworker/archive/nojha-2021-01-16_17:36:29-rados-master-distro-basic-smithi/5791922
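
The same lock record can be queried from the lab's lock server; a sketch of a typical query (assuming a teuthology checkout configured against the sepia lock server — the exact invocation may differ by version):

    # Show the current lock status of this node
    teuthology-lock --list smithi059
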
Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 5792306 2021-01-16 17:58:16 2021-01-16 19:04:02 2021-01-16 19:22:02 0:18:00 0:09:19 0:08:41 smithi master centos 8.2 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{centos_8}} 1
dead 5792276 2021-01-16 17:57:51 2021-01-16 18:39:47 2021-01-16 19:29:47 0:50:00 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/cache-snaps} 3
running 5791922 2021-01-16 17:38:24 2021-01-16 19:48:18 2021-01-16 20:30:18 0:43:22 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5791899 2021-01-16 17:38:06 2021-01-16 19:30:28 2021-01-16 19:56:28 0:26:00 0:12:05 0:13:55 smithi master centos 8.2 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5791830 2021-01-16 16:14:20 2021-01-16 16:16:44 2021-01-16 19:04:47 2:48:03 2:34:39 0:13:24 smithi master ubuntu 18.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-nautilus-install/pacific 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/classic thrashosds-health ubuntu_18.04} 5
pass 5791792 2021-01-16 11:16:03 2021-01-16 12:55:45 2021-01-16 15:35:48 2:40:03 2:29:16 0:10:47 smithi master ubuntu 18.04 powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-stupid powercycle/default supported-all-distro/ubuntu_latest tasks/radosbench thrashosds-health whitelist_health} 4
pass 5790748 2021-01-16 05:01:20 2021-01-16 08:35:56 2021-01-16 11:05:58 2:30:02 2:16:20 0:13:42 smithi master centos 8.2 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/{0-install test/rados_bench}} 3
fail 5790655 2021-01-16 04:00:46 2021-01-16 08:01:57 2021-01-16 08:39:57 0:38:00 0:24:31 0:13:29 smithi master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/blogbench}} 3
Failure Reason:

"2021-01-16T08:24:59.557999+0000 mds.a (mds.0) 19 : cluster [WRN] Scrub error on inode 0x100000001fe (/client.0/tmp/blogbench-1.0/src) see mds.a log and `damage ls` output for details" in cluster log

pass 5790537 2021-01-16 03:59:10 2021-01-16 06:39:03 2021-01-16 08:11:04 1:32:01 1:21:24 0:10:37 smithi master centos 8.2 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_ffsb traceless/50pc} 2
pass 5790508 2021-01-16 03:58:41 2021-01-16 06:20:43 2021-01-16 06:40:43 0:20:00 0:09:53 0:10:07 smithi master centos 8.2 fs:libcephfs/{begin clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/libcephfs_python} 2
pass 5790268 2021-01-16 02:30:42 2021-01-16 02:31:13 2021-01-16 03:39:13 1:08:00 0:53:24 0:14:36 smithi master ubuntu 18.04 upgrade:mimic-x/stress-split-erasure-code/{0-cluster/{openstack start} 1-luminous-install/mimic 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-ec-workload 5-finish-upgrade 6-octopus 7-final-workload objectstore/bluestore-bitmap thrashosds-health ubuntu_latest} 5
pass 5790203 2021-01-16 01:12:24 2021-01-16 05:40:32 2021-01-16 06:20:32 0:40:00 0:30:38 0:09:22 smithi master centos 8.2 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
fail 5790147 2021-01-16 01:11:39 2021-01-16 05:10:31 2021-01-16 05:40:31 0:30:00 0:18:15 0:11:45 smithi master centos 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{centos_8.0} fixed-2 mon_election/connectivity} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds
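
This is teuthology's generic polling timeout while waiting for the cephadm upgrade to converge; the usual manual checks on a live cluster (hypothetical here, not part of the recorded run) would be:

    # Is the orchestrator upgrade still progressing, and to which image?
    ceph orch upgrade status
    # Overall health and per-daemon versions after the partial upgrade
    ceph -s
    ceph versions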

fail 5790100 2021-01-16 01:11:01 2021-01-16 04:46:30 2021-01-16 05:12:30 0:26:00 0:15:56 0:10:04 smithi master centos 8.2 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi064 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=174f7024c30497be3866b21627946f27bb06fa66 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 5790046 2021-01-16 01:10:16 2021-01-16 04:18:06 2021-01-16 04:46:05 0:27:59 0:19:06 0:08:53 smithi master centos 8.2 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 5789959 2021-01-16 01:09:08 2021-01-16 03:08:02 2021-01-16 04:20:03 1:12:01 0:31:25 0:40:36 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/cache-snaps} 3
pass 5789903 2021-01-16 01:08:23 2021-01-16 01:10:41 2021-01-16 01:28:41 0:18:00 0:08:28 0:09:32 smithi master centos 8.2 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 5789786 2021-01-15 22:36:34 2021-01-15 23:42:27 2021-01-16 01:12:28 1:30:01 1:07:13 0:22:48 smithi master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/blogbench}} 3
Failure Reason:

Command failed (workunit test suites/blogbench.sh) on smithi059 with status 135: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3ace308c77fe7d6e4084dd9ccda45599f3efcb3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/blogbench.sh'

pass 5789776 2021-01-15 22:36:26 2021-01-15 23:34:56 2021-01-15 23:54:56 0:20:00 0:10:43 0:09:17 smithi master ubuntu 18.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/direct_io}} 3
fail 5789730 2021-01-15 22:35:49 2021-01-15 23:03:57 2021-01-15 23:35:57 0:32:00 0:21:52 0:10:08 smithi master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi026 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3ace308c77fe7d6e4084dd9ccda45599f3efcb3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'