Name:          smithi049.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        False
Locked Since:
Locked By:
OS Type:       rhel
OS Version:    8.1
Arch:          x86_64
Description:   None
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 5197294 2020-07-03 07:07:52 2020-07-03 10:48:20 2020-07-03 11:10:20 0:22:00 0:14:58 0:07:02 smithi master rhel 8.1 rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} 1
Failure Reason:

Command failed on smithi049 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'
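Several jobs in this run (5197294, 5196733, 5196685, 5195464) fail with this same workunit-clone command and status 128, which is git's exit code for fatal errors: either the `git clone` (repository unreachable over the git:// protocol) or the `git checkout` (commit not yet present on the git.ceph.com mirror) can produce it. A minimal local sketch, using an illustrative nonexistent path, showing a fatal clone failure yielding 128:

```shell
# Cloning a repository that does not exist is a fatal git error;
# git exits with status 128, the same code seen in the jobs above.
# /nonexistent/ceph.git is an illustrative path, not a real mirror.
dest="$(mktemp -d)/clone.client.0"
git clone /nonexistent/ceph.git "$dest"
echo "clone exit status: $?"
```

Since the failing checkout SHA (baa1ea6a9656...) is identical across these jobs, a mirror-side problem at clone or checkout time is the likelier culprit than the individual test workloads.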

pass 5197242 2020-07-03 07:07:07 2020-07-03 10:28:14 2020-07-03 10:50:13 0:21:59 0:15:12 0:06:47 smithi master rhel 8.1 rados/multimon/{clusters/6 msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 2
pass 5197183 2020-07-03 07:06:17 2020-07-03 10:06:03 2020-07-03 10:30:03 0:24:00 0:16:55 0:07:05 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/redirect_promote_tests} 2
fail 5196864 2020-07-03 05:16:59 2020-07-03 05:55:30 2020-07-03 10:07:36 4:12:06 3:53:32 0:18:34 smithi master krbd/singleton/{bluestore-bitmap conf msgr-failures/few tasks/rbd_xfstests} 4
Failure Reason:

Command failed on smithi161 with status 3: "/usr/bin/sudo TESTDIR=/home/ubuntu/cephtest adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/bash /home/ubuntu/cephtest/run_xfstests.sh -c 1 -f ext4 -t /dev/rbd0 -s /dev/rbd1 -x /tmp/excludegv71a9ln -r -- '-g auto -g blockdev -x clone'"

pass 5196816 2020-07-03 05:07:55 2020-07-03 05:37:13 2020-07-03 06:01:13 0:24:00 0:13:05 0:10:55 smithi master ubuntu rgw/multifs/{clusters/fixed-2 frontend/civetweb objectstore/filestore-xfs overrides rgw_pool_type/ec tasks/rgw_user_quota} 2
fail 5196733 2020-07-03 05:02:24 2020-07-03 05:02:29 2020-07-03 05:40:29 0:38:00 0:19:10 0:18:50 smithi master rhel 8.1 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} 3
Failure Reason:

Command failed on smithi005 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'

fail 5196685 2020-07-03 03:19:26 2020-07-03 04:27:01 2020-07-03 04:47:00 0:19:59 0:13:33 0:06:26 smithi master centos 8.1 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench validater/lockdep} 2
Failure Reason:

Command failed on smithi005 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'

fail 5196565 2020-07-03 03:17:35 2020-07-03 03:36:42 2020-07-03 04:28:42 0:52:00 0:42:25 0:09:35 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/asok_dump_tree} 2
Failure Reason:

"2020-07-03T03:52:38.456733+0000 mon.a (mon.0) 171 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
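The PG_AVAILABILITY failures (this job, plus 5195630 and 5195532 below) are not command errors: teuthology marks a job failed when the cluster log contains [WRN] or [ERR] entries that are not covered by the job's log whitelist (the `whitelist_health` fragments visible in the job descriptions). Conceptually the check reduces to a pattern match; a sketch over the sample line from this failure:

```shell
# Teuthology's cluster-log scan, reduced to a grep over one sample line.
# Any [WRN]/[ERR] entry not matched by the job's whitelist fails the job.
line='2020-07-03T03:52:38.456733+0000 mon.a (mon.0) 171 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)'
echo "$line" | grep -qE 'cluster \[(WRN|ERR)\]' && echo "unwhitelisted health warning: job fails"
```

A transient "1 pg inactive, 1 pg peering" warning during startup or rebalancing is a common source of this failure mode and is often addressed by whitelisting PG_AVAILABILITY for the affected suite rather than by a code fix.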

pass 5196507 2020-07-03 01:51:24 2020-07-03 03:14:34 2020-07-03 03:38:33 0:23:59 0:16:51 0:07:08 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache} 2
pass 5196445 2020-07-03 01:50:37 2020-07-03 02:52:18 2020-07-03 03:14:17 0:21:59 0:16:49 0:05:10 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-agent-big} 2
pass 5196359 2020-07-03 01:49:37 2020-07-03 02:22:17 2020-07-03 02:52:16 0:29:59 0:19:36 0:10:23 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5196236 2020-07-03 01:47:58 2020-07-03 01:47:59 2020-07-03 02:21:59 0:34:00 0:24:29 0:09:31 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 5195978 2020-07-02 20:20:38 2020-07-02 20:21:05 2020-07-02 23:09:07 2:48:02 2:41:43 0:06:19 smithi master centos 8.1 rados:standalone/{supported-random-distro$/{centos_8} workloads/osd} 1
Failure Reason:

Command failed (workunit test osd/osd-rep-recov-eio.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5fbbbaa1143e921c2d75a43288975f5a755d72b8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-rep-recov-eio.sh'
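Unlike the status-128 git failures elsewhere in this run, job 5195978 fails with status 1 from the standalone script itself. The `timeout 3h` wrapper in the command passes through the wrapped command's exit status unchanged; only an actual expiry of the time limit would alter it (GNU `timeout` exits 124 in that case). A quick illustration of both behaviors:

```shell
# timeout passes through the wrapped command's exit status...
timeout 3h sh -c 'exit 1'; echo "status: $?"   # status: 1
# ...and substitutes 124 only when the time limit is actually hit.
timeout 1 sleep 5; echo "status: $?"           # status: 124
```

So this failure reflects an assertion or check inside osd-rep-recov-eio.sh, not a hang: the job's 2:41:43 duration stayed under the 3h cap.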

pass 5195833 2020-07-02 15:41:42 2020-07-02 18:48:08 2020-07-02 19:22:08 0:34:00 0:26:15 0:07:45 smithi master rhel 8.1 rbd/qemu/{cache/writethrough clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/ec-cache-pool supported-random-distro$/{rhel_8} workloads/qemu_bonnie} 3
pass 5195796 2020-07-02 15:40:40 2020-07-02 18:07:34 2020-07-02 18:51:34 0:44:00 0:19:46 0:24:14 smithi master centos 8.1 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/python_api_tests_with_journaling} 3
pass 5195702 2020-07-02 15:35:56 2020-07-02 17:15:51 2020-07-02 18:25:52 1:10:01 0:45:58 0:24:03 smithi master centos 8.1 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-zstd policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-fsx-workunit} 2
fail 5195630 2020-07-02 15:31:41 2020-07-02 16:17:14 2020-07-02 17:35:16 1:18:02 0:39:26 0:38:36 smithi master ubuntu 18.04 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/python_api_tests} 3
Failure Reason:

"2020-07-02T17:00:36.763137+0000 mon.a (mon.0) 130 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 5195557 2020-07-02 15:27:18 2020-07-02 15:27:19 2020-07-02 16:45:20 1:18:01 0:25:55 0:52:06 smithi master ubuntu 18.04 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
fail 5195532 2020-07-02 15:26:57 2020-07-02 15:26:59 2020-07-02 16:08:58 0:41:59 0:27:38 0:14:21 smithi master ubuntu 18.04 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-lz4 pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
Failure Reason:

"2020-07-02T15:59:37.584069+0000 mon.a (mon.0) 684 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 5195464 2020-07-02 11:17:35 2020-07-02 11:17:48 2020-07-02 12:07:47 0:49:59 0:16:34 0:33:25 smithi master ubuntu 18.04 powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_ffsb thrashosds-health whitelist_health} 4
Failure Reason:

Command failed on smithi055 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout baa1ea6a9656c3db06c66032fa80b476721947ba'