Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 5232602 2020-07-17 01:13:03 2020-07-17 01:30:56 2020-07-17 02:10:56 0:40:00 0:13:49 0:26:11 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi129 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a541b7e0-c7d1-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi129:vg_nvme/lv_4'

fail 5232603 2020-07-17 01:13:04 2020-07-17 01:30:57 2020-07-17 01:44:56 0:13:59 0:07:38 0:06:21 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_orch_cli} 1
Failure Reason:

Command failed on smithi122 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9f9045e4-c7ce-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi122:vg_nvme/lv_4'

pass 5232604 2020-07-17 01:13:05 2020-07-17 01:30:59 2020-07-17 01:46:59 0:16:00 0:06:31 0:09:29 smithi master ubuntu 18.04 rados/multimon/{clusters/6 msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 2
pass 5232605 2020-07-17 01:13:06 2020-07-17 01:31:00 2020-07-17 02:13:00 0:42:00 0:26:45 0:15:15 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 5232606 2020-07-17 01:13:07 2020-07-17 01:31:00 2020-07-17 02:00:59 0:29:59 0:21:46 0:08:13 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5232607 2020-07-17 01:13:08 2020-07-17 01:31:01 2020-07-17 01:49:00 0:17:59 0:11:32 0:06:27 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} 2
pass 5232608 2020-07-17 01:13:09 2020-07-17 01:33:04 2020-07-17 02:01:04 0:28:00 0:17:14 0:10:46 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} 2
pass 5232609 2020-07-17 01:13:10 2020-07-17 01:33:04 2020-07-17 02:09:04 0:36:00 0:27:22 0:08:38 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
pass 5232610 2020-07-17 01:13:11 2020-07-17 01:33:05 2020-07-17 01:51:04 0:17:59 0:11:47 0:06:12 smithi master rhel 8.1 rados/singleton/{all/deduptool msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
pass 5232611 2020-07-17 01:13:12 2020-07-17 01:34:50 2020-07-17 02:12:50 0:38:00 0:30:23 0:07:37 smithi master rhel 8.1 rados/monthrash/{ceph clusters/3-mons msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
fail 5232612 2020-07-17 01:13:13 2020-07-17 01:34:50 2020-07-17 01:56:50 0:22:00 0:15:06 0:06:54 smithi master rhel 8.0 rados/cephadm/with-work/{distro/rhel_8.0 fixed-2 mode/packaged msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi080 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f3dad8f2-c7cf-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi080:vg_nvme/lv_4'

pass 5232613 2020-07-17 01:13:14 2020-07-17 01:35:01 2020-07-17 01:55:00 0:19:59 0:12:31 0:07:28 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-agent-small} 2
fail 5232614 2020-07-17 01:13:16 2020-07-17 01:35:04 2020-07-17 02:09:04 0:34:00 0:25:34 0:08:26 smithi master rhel 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-bitmap supported-random-distro$/{rhel_8} tasks/dashboard} 2
Failure Reason:

Test failure: test_get_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)

pass 5232615 2020-07-17 01:13:17 2020-07-17 01:36:56 2020-07-17 01:54:55 0:17:59 0:12:34 0:05:25 smithi master rhel 8.1 rados/objectstore/{backends/alloc-hint supported-random-distro$/{rhel_8}} 1
pass 5232616 2020-07-17 01:13:18 2020-07-17 01:36:56 2020-07-17 01:56:55 0:19:59 0:13:40 0:06:19 smithi master rhel 8.1 rados/rest/{mgr-restful supported-random-distro$/{rhel_8}} 1
pass 5232617 2020-07-17 01:13:19 2020-07-17 01:36:56 2020-07-17 01:56:55 0:19:59 0:13:33 0:06:26 smithi master centos 8.1 rados/singleton-nomsgr/{all/admin_socket_output rados supported-random-distro$/{centos_8}} 1
pass 5232618 2020-07-17 01:13:20 2020-07-17 01:36:56 2020-07-17 01:56:55 0:19:59 0:14:27 0:05:32 smithi master rhel 8.1 rados/standalone/{supported-random-distro$/{rhel_8} workloads/crush} 1
fail 5232619 2020-07-17 01:13:21 2020-07-17 01:36:56 2020-07-17 03:18:58 1:42:02 0:16:49 1:25:13 smithi master ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} 4
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi035 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=nautilus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

pass 5232620 2020-07-17 01:13:22 2020-07-17 01:37:00 2020-07-17 02:03:00 0:26:00 0:19:08 0:06:52 smithi master centos 8.1 rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
pass 5232621 2020-07-17 01:13:23 2020-07-17 01:37:01 2020-07-17 01:55:01 0:18:00 0:11:45 0:06:15 smithi master ubuntu 18.04 rados/cephadm/orchestrator_cli/{2-node-mgr orchestrator_cli supported-random-distro$/{ubuntu_latest}} 2
pass 5232622 2020-07-17 01:13:24 2020-07-17 01:37:02 2020-07-17 01:55:01 0:17:59 0:10:16 0:07:43 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-low-osd-mem-target openstack settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
pass 5232623 2020-07-17 01:13:25 2020-07-17 01:37:13 2020-07-17 01:57:13 0:20:00 0:13:13 0:06:47 smithi master rhel 8.1 rados/singleton/{all/divergent_priors msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
pass 5232624 2020-07-17 01:13:26 2020-07-17 01:37:22 2020-07-17 02:11:22 0:34:00 0:10:54 0:23:06 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 5232625 2020-07-17 01:13:27 2020-07-17 01:37:50 2020-07-17 01:53:49 0:15:59 0:07:56 0:08:03 smithi master centos 8.0 rados/cephadm/smoke/{distro/centos_8.0 fixed-2 start} 2
Failure Reason:

Command failed on smithi037 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8ed20f02-c7cf-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi037:vg_nvme/lv_4'

pass 5232626 2020-07-17 01:13:28 2020-07-17 01:38:55 2020-07-17 02:10:55 0:32:00 0:18:46 0:13:14 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy} 2
fail 5232627 2020-07-17 01:13:29 2020-07-17 01:38:56 2020-07-17 01:56:55 0:17:59 0:07:38 0:10:21 smithi master centos 8.0 rados/cephadm/smoke-roleless/{distro/centos_8.0 start} 2
Failure Reason:

Command failed on smithi160 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0358796a-c7d0-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi160:vg_nvme/lv_4'

pass 5232628 2020-07-17 01:13:30 2020-07-17 01:38:56 2020-07-17 01:58:55 0:19:59 0:12:55 0:07:04 smithi master rhel 8.1 rados/singleton/{all/divergent_priors2 msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
pass 5232629 2020-07-17 01:13:31 2020-07-17 01:39:02 2020-07-17 02:01:01 0:21:59 0:15:26 0:06:33 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/progress} 2
dead 5232630 2020-07-17 01:13:33 2020-07-17 01:40:11 2020-07-17 13:42:50 12:02:39 smithi master centos 8.0 rados/cephadm/upgrade/{1-start 2-start-upgrade 3-wait distro$/{centos_8.0} fixed-2} 2
pass 5232631 2020-07-17 01:13:34 2020-07-17 01:40:41 2020-07-17 02:02:40 0:21:59 0:13:26 0:08:33 smithi master rhel 8.1 rados/singleton-nomsgr/{all/balancer rados supported-random-distro$/{rhel_8}} 1
pass 5232632 2020-07-17 01:13:35 2020-07-17 01:40:50 2020-07-17 01:58:50 0:18:00 0:10:14 0:07:46 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-stupid openstack settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
pass 5232633 2020-07-17 01:13:36 2020-07-17 01:41:01 2020-07-17 02:19:01 0:38:00 0:27:39 0:10:21 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps} 2
fail 5232634 2020-07-17 01:13:37 2020-07-17 01:43:23 2020-07-17 02:13:22 0:29:59 0:17:39 0:12:20 smithi master rhel 8.1 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/rados_python} 2
Failure Reason:

Command failed (workunit test rados/test_python.sh) on smithi109 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'

pass 5232635 2020-07-17 01:13:38 2020-07-17 01:43:23 2020-07-17 02:01:22 0:17:59 0:07:18 0:10:41 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_adoption} 1
pass 5232636 2020-07-17 01:13:39 2020-07-17 01:43:23 2020-07-17 02:03:22 0:19:59 0:12:45 0:07:14 smithi master rhel 8.1 rados/singleton/{all/dump-stuck msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
fail 5232637 2020-07-17 01:13:41 2020-07-17 01:43:23 2020-07-17 02:15:22 0:31:59 0:13:04 0:18:55 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi065 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 92a5d3e0-c7d2-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi065:vg_nvme/lv_4'

pass 5232638 2020-07-17 01:13:42 2020-07-17 01:43:46 2020-07-17 02:19:46 0:36:00 0:18:25 0:17:35 smithi master centos 8.1 rados/multimon/{clusters/9 msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/mon_recovery} 3
pass 5232639 2020-07-17 01:13:43 2020-07-17 01:45:18 2020-07-17 02:53:19 1:08:01 0:15:12 0:52:49 smithi master rhel 8.1 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 5232640 2020-07-17 01:13:44 2020-07-17 01:45:19 2020-07-17 02:21:18 0:35:59 0:26:05 0:09:54 smithi master centos 8.1 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5232641 2020-07-17 01:13:45 2020-07-17 01:45:19 2020-07-17 02:37:19 0:52:00 0:44:17 0:07:43 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/rados_api_tests validater/valgrind} 2
pass 5232642 2020-07-17 01:13:46 2020-07-17 01:45:18 2020-07-17 03:31:20 1:46:02 1:38:42 0:07:20 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-radosbench} 2
fail 5232643 2020-07-17 01:13:46 2020-07-17 01:47:15 2020-07-17 02:13:14 0:25:59 0:14:26 0:11:33 smithi master rhel 8.1 rados/cephadm/with-work/{distro/rhel_latest fixed-2 mode/root msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi037 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1685cdf6-c7d2-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi037:vg_nvme/lv_4'

pass 5232644 2020-07-17 01:13:47 2020-07-17 01:47:15 2020-07-17 02:37:15 0:50:00 0:42:06 0:07:54 smithi master ubuntu 18.04 rados/monthrash/{ceph clusters/9-mons msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/snaps-few-objects} 2
pass 5232645 2020-07-17 01:13:48 2020-07-17 01:47:15 2020-07-17 02:31:14 0:43:59 0:37:18 0:06:41 smithi master rhel 8.1 rados/singleton/{all/ec-lost-unfound msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} 1
pass 5232646 2020-07-17 01:13:49 2020-07-17 01:48:57 2020-07-17 02:12:57 0:24:00 0:18:22 0:05:38 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
pass 5232647 2020-07-17 01:13:50 2020-07-17 01:48:58 2020-07-17 02:26:58 0:38:00 0:26:54 0:11:06 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} 2
pass 5232648 2020-07-17 01:13:51 2020-07-17 01:48:58 2020-07-17 02:32:58 0:44:00 0:31:53 0:12:07 smithi master rhel 8.1 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 5232649 2020-07-17 01:13:52 2020-07-17 01:49:02 2020-07-17 02:29:02 0:40:00 0:32:53 0:07:07 smithi master rhel 8.1 rados/singleton-bluestore/{all/cephtool msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
fail 5232650 2020-07-17 01:13:53 2020-07-17 01:51:01 2020-07-17 02:19:00 0:27:59 0:19:36 0:08:23 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8} tasks/dashboard} 2
Failure Reason:

Test failure: test_get_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)

pass 5232651 2020-07-17 01:13:54 2020-07-17 01:51:05 2020-07-17 02:15:05 0:24:00 0:15:21 0:08:39 smithi master rhel 8.1 rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{rhel_8}} 1
pass 5232652 2020-07-17 01:13:56 2020-07-17 01:51:12 2020-07-17 02:31:12 0:40:00 0:11:02 0:28:58 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5232653 2020-07-17 01:13:57 2020-07-17 01:54:06 2020-07-17 02:08:05 0:13:59 0:08:04 0:05:55 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/cache-fs-trunc rados supported-random-distro$/{ubuntu_latest}} 1
pass 5232654 2020-07-17 01:13:58 2020-07-17 01:54:33 2020-07-17 02:12:32 0:17:59 0:10:35 0:07:24 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-basic-min-osd-mem-target openstack settings/optimized ubuntu_latest workloads/fio_4M_rand_write} 1
fail 5232655 2020-07-17 01:13:59 2020-07-17 01:54:51 2020-07-17 02:08:50 0:13:59 0:06:26 0:07:33 smithi master centos 8.1 rados/cephadm/smoke/{distro/centos_latest fixed-2 start} 2
Failure Reason:

Command failed on smithi187 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9f0c7068-c7d1-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi187:vg_nvme/lv_4'

pass 5232656 2020-07-17 01:14:00 2020-07-17 01:54:56 2020-07-17 02:08:56 0:14:00 0:06:14 0:07:46 smithi master centos 8.1 rados/singleton/{all/erasure-code-nonregression msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
fail 5232657 2020-07-17 01:14:01 2020-07-17 01:54:56 2020-07-17 02:08:56 0:14:00 0:06:08 0:07:52 smithi master centos 8.1 rados/cephadm/smoke-roleless/{distro/centos_latest start} 2
Failure Reason:

Command failed on smithi132 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cd8d100a-c7d1-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi132:vg_nvme/lv_4'

pass 5232658 2020-07-17 01:14:02 2020-07-17 01:54:58 2020-07-17 02:12:58 0:18:00 0:10:13 0:07:47 smithi master centos 8.1 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{centos_8} tasks/prometheus} 2
pass 5232659 2020-07-17 01:14:03 2020-07-17 01:55:02 2020-07-17 02:37:02 0:42:00 0:35:35 0:06:25 smithi master ubuntu 18.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} 1
pass 5232660 2020-07-17 01:14:04 2020-07-17 01:55:02 2020-07-17 02:35:02 0:40:00 0:27:23 0:12:37 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-snaps} 2
fail 5232661 2020-07-17 01:14:05 2020-07-17 01:55:03 2020-07-17 02:17:02 0:21:59 0:09:28 0:12:31 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04 fixed-2 mode/packaged msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi158 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c04749f0-c7d2-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi158:vg_nvme/lv_4'

pass 5232662 2020-07-17 01:14:06 2020-07-17 01:55:32 2020-07-17 02:21:31 0:25:59 0:17:43 0:08:16 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_stress_watch} 2
pass 5232663 2020-07-17 01:14:07 2020-07-17 01:57:07 2020-07-17 02:29:07 0:32:00 0:26:48 0:05:12 smithi master centos 8.1 rados/singleton/{all/lost-unfound-delete msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 1
pass 5232664 2020-07-17 01:14:08 2020-07-17 01:57:07 2020-07-17 02:07:06 0:09:59 0:03:45 0:06:14 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_cephadm_repos} 1
pass 5232665 2020-07-17 01:14:09 2020-07-17 01:57:08 2020-07-17 02:17:07 0:19:59 0:11:33 0:08:26 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-bitmap openstack settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
pass 5232666 2020-07-17 01:14:10 2020-07-17 01:57:08 2020-07-17 02:13:07 0:15:59 0:07:33 0:08:26 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/ceph-kvstore-tool rados supported-random-distro$/{ubuntu_latest}} 1
pass 5232667 2020-07-17 01:14:11 2020-07-17 01:57:07 2020-07-17 02:17:06 0:19:59 0:10:31 0:09:28 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache} 2
pass 5232668 2020-07-17 01:14:12 2020-07-17 01:57:08 2020-07-17 02:23:07 0:25:59 0:07:06 0:18:53 smithi master ubuntu 18.04 rados/multimon/{clusters/21 msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 3
pass 5232669 2020-07-17 01:14:13 2020-07-17 01:57:08 2020-07-17 02:39:08 0:42:00 0:25:51 0:16:09 smithi master centos 8.1 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 5232670 2020-07-17 01:14:14 2020-07-17 01:57:14 2020-07-17 02:31:14 0:34:00 0:25:33 0:08:27 smithi master centos 8.1 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5232671 2020-07-17 01:14:16 2020-07-17 01:58:53 2020-07-17 02:34:53 0:36:00 0:16:39 0:19:21 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados tasks/rados_cls_all validater/lockdep} 2
fail 5232672 2020-07-17 01:14:17 2020-07-17 01:58:53 2020-07-17 02:20:53 0:22:00 0:10:34 0:11:26 smithi master rhel 8.0 rados/cephadm/smoke/{distro/rhel_8.0 fixed-2 start} 2
Failure Reason:

Command failed on smithi072 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 59eaefe4-c7d3-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi072:vg_nvme/lv_4'

pass 5232673 2020-07-17 01:14:18 2020-07-17 01:58:56 2020-07-17 02:30:56 0:32:00 0:24:44 0:07:16 smithi master rhel 8.1 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-small-objects-balanced} 2
pass 5232674 2020-07-17 01:14:19 2020-07-17 02:01:17 2020-07-17 02:37:17 0:36:00 0:30:18 0:05:42 smithi master centos 8.1 rados/singleton/{all/lost-unfound msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
pass 5232675 2020-07-17 01:14:20 2020-07-17 02:01:17 2020-07-17 02:27:17 0:26:00 0:15:52 0:10:08 smithi master ubuntu 18.04 rados/monthrash/{ceph clusters/3-mons msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/pool-create-delete} 2
fail 5232676 2020-07-17 01:14:21 2020-07-17 02:01:17 2020-07-17 02:43:18 0:42:01 0:23:59 0:18:02 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5232677 2020-07-17 01:14:22 2020-07-17 02:01:17 2020-07-17 02:23:16 0:21:59 0:10:03 0:11:56 smithi master rhel 8.0 rados/cephadm/smoke-roleless/{distro/rhel_8.0 start} 2
Failure Reason:

Command failed on smithi085 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid be437cb8-c7d3-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi085:vg_nvme/lv_4'

pass 5232678 2020-07-17 01:14:23 2020-07-17 02:01:24 2020-07-17 02:35:23 0:33:59 0:11:04 0:22:55 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5232679 2020-07-17 01:14:24 2020-07-17 02:03:00 2020-07-17 02:16:59 0:13:59 0:06:37 0:07:22 smithi master centos 8.1 rados/singleton/{all/max-pg-per-osd.from-mon msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
fail 5232680 2020-07-17 01:14:25 2020-07-17 02:03:02 2020-07-17 02:23:01 0:19:59 0:12:47 0:07:12 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi022 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8c81a768-c7d3-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi022:vg_nvme/lv_4'

pass 5232681 2020-07-17 01:14:26 2020-07-17 02:03:13 2020-07-17 02:27:12 0:23:59 0:15:29 0:08:30 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/dedup_tier} 2
fail 5232682 2020-07-17 01:14:27 2020-07-17 02:03:23 2020-07-17 02:33:23 0:30:00 0:19:46 0:10:14 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8} tasks/dashboard} 2
Failure Reason:

Test failure: test_get_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)

pass 5232683 2020-07-17 01:14:28 2020-07-17 02:05:11 2020-07-17 02:19:10 0:13:59 0:06:52 0:07:07 smithi master centos 8.1 rados/objectstore/{backends/filejournal supported-random-distro$/{centos_8}} 1
pass 5232684 2020-07-17 01:14:30 2020-07-17 02:05:11 2020-07-17 02:23:10 0:17:59 0:09:52 0:08:07 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-comp openstack settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} 1
fail 5232685 2020-07-17 01:14:31 2020-07-17 02:05:16 2020-07-17 02:23:15 0:17:59 0:09:02 0:08:57 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Command failed on smithi029 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 98391168-c7d3-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi029:vg_nvme/lv_4'

pass 5232686 2020-07-17 01:14:32 2020-07-17 02:07:04 2020-07-17 02:19:03 0:11:59 0:06:17 0:05:42 smithi master centos 8.1 rados/singleton-nomsgr/{all/ceph-post-file rados supported-random-distro$/{centos_8}} 1
pass 5232687 2020-07-17 01:14:33 2020-07-17 02:07:08 2020-07-17 02:21:07 0:13:59 0:07:33 0:06:26 smithi master centos 8.1 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/rados_striper} 2
pass 5232688 2020-07-17 01:14:34 2020-07-17 02:08:22 2020-07-17 02:24:21 0:15:59 0:07:26 0:08:33 smithi master centos 8.1 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zstd supported-random-distro$/{centos_8} tasks/workunits} 2
pass 5232689 2020-07-17 01:14:35 2020-07-17 02:08:52 2020-07-17 02:30:51 0:21:59 0:15:51 0:06:08 smithi master rhel 8.1 rados/singleton/{all/max-pg-per-osd.from-primary msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
pass 5232690 2020-07-17 01:14:36 2020-07-17 02:08:57 2020-07-17 02:38:57 0:30:00 0:20:07 0:09:53 smithi master centos 8.1 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
fail 5232691 2020-07-17 01:14:37 2020-07-17 02:08:57 2020-07-17 02:30:57 0:22:00 0:10:08 0:11:52 smithi master rhel 8.1 rados/cephadm/smoke/{distro/rhel_latest fixed-2 start} 2
Failure Reason:

Command failed on smithi065 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d9f6e390-c7d4-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi065:vg_nvme/lv_4'

pass 5232692 2020-07-17 01:14:38 2020-07-17 02:09:05 2020-07-17 02:47:06 0:38:01 0:22:48 0:15:13 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/pool-snaps-few-objects} 2
fail 5232693 2020-07-17 01:14:39 2020-07-17 02:09:06 2020-07-17 02:27:05 0:17:59 0:09:05 0:08:54 smithi master rhel 8.1 rados/cephadm/smoke-roleless/{distro/rhel_latest start} 2
Failure Reason:

Command failed on smithi164 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5f137454-c7d4-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi164:vg_nvme/lv_4'

pass 5232694 2020-07-17 01:14:40 2020-07-17 02:10:15 2020-07-17 02:28:14 0:17:59 0:11:07 0:06:52 smithi master ubuntu 18.04 rados/singleton/{all/max-pg-per-osd.from-replica msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
pass 5232695 2020-07-17 01:14:41 2020-07-17 02:10:57 2020-07-17 02:28:56 0:17:59 0:10:56 0:07:03 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-low-osd-mem-target openstack settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} 1
fail 5232696 2020-07-17 01:14:42 2020-07-17 02:10:58 2020-07-17 02:32:57 0:21:59 0:11:08 0:10:51 smithi master centos 8.0 rados/cephadm/with-work/{distro/centos_8.0 fixed-2 mode/packaged msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi148 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16d095d6-c7d5-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi148:vg_nvme/lv_4'

pass 5232697 2020-07-17 01:14:43 2020-07-17 02:11:40 2020-07-17 02:29:39 0:17:59 0:06:25 0:11:34 smithi master centos 8.1 rados/multimon/{clusters/3 msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
pass 5232698 2020-07-17 01:14:45 2020-07-17 02:12:33 2020-07-17 03:20:35 1:08:02 0:11:02 0:57:00 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 5232699 2020-07-17 01:14:46 2020-07-17 02:13:07 2020-07-17 02:51:07 0:38:00 0:25:29 0:12:31 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5232700 2020-07-17 01:14:47 2020-07-17 02:13:07 2020-07-17 02:51:07 0:38:00 0:25:22 0:12:38 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/valgrind} 2
pass 5232701 2020-07-17 01:14:48 2020-07-17 02:13:07 2020-07-17 02:37:07 0:24:00 0:10:10 0:13:50 smithi master ubuntu 18.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mgr} 1
pass 5232702 2020-07-17 01:14:49 2020-07-17 02:13:07 2020-07-17 02:49:07 0:36:00 0:21:18 0:14:42 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read} 2
pass 5232703 2020-07-17 01:14:50 2020-07-17 02:13:08 2020-07-17 02:33:08 0:20:00 0:09:54 0:10:06 smithi master ubuntu 18.04 rados/monthrash/{ceph clusters/9-mons msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/rados_5925} 2
pass 5232704 2020-07-17 01:14:51 2020-07-17 02:13:13 2020-07-17 02:53:13 0:40:00 0:24:27 0:15:33 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/rados_api_tests} 2
pass 5232705 2020-07-17 01:14:53 2020-07-17 02:13:15 2020-07-17 02:31:15 0:18:00 0:11:13 0:06:47 smithi master rhel 8.1 rados/singleton-nomsgr/{all/export-after-evict rados supported-random-distro$/{rhel_8}} 1
pass 5232706 2020-07-17 01:14:54 2020-07-17 02:13:24 2020-07-17 04:31:28 2:18:04 0:15:47 2:02:17 smithi master rhel 8.1 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5232707 2020-07-17 01:14:55 2020-07-17 02:15:22 2020-07-17 02:29:21 0:13:59 0:08:47 0:05:12 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
pass 5232708 2020-07-17 01:14:56 2020-07-17 02:15:24 2020-07-17 02:31:24 0:16:00 0:07:57 0:08:03 smithi master ubuntu 18.04 rados/singleton/{all/mon-auth-caps msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
fail 5232709 2020-07-17 01:14:57 2020-07-17 02:17:03 2020-07-17 02:59:03 0:42:00 0:14:39 0:27:21 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi143 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9a6948e0-c7d8-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi143:vg_nvme/lv_4'

fail 5232710 2020-07-17 01:14:58 2020-07-17 02:17:03 2020-07-17 02:35:03 0:18:00 0:06:05 0:11:55 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04 fixed-2 start} 2
Failure Reason:

Command failed on smithi167 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 540f6e7c-c7d5-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi167:vg_nvme/lv_4'

pass 5232711 2020-07-17 01:14:59 2020-07-17 02:17:04 2020-07-17 02:59:04 0:42:00 0:29:12 0:12:48 smithi master centos 8.1 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_big} 2
pass 5232712 2020-07-17 01:15:00 2020-07-17 02:17:08 2020-07-17 02:33:07 0:15:59 0:08:55 0:07:04 smithi master ubuntu 18.04 rados/singleton/{all/mon-config-key-caps msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 5232713 2020-07-17 01:15:02 2020-07-17 02:17:09 2020-07-17 02:31:08 0:13:59 0:06:02 0:07:57 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04 start} 2
Failure Reason:

Command failed on smithi083 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e9541c68-c7d4-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi083:vg_nvme/lv_4'

pass 5232714 2020-07-17 01:15:03 2020-07-17 02:18:47 2020-07-17 03:10:48 0:52:01 0:43:40 0:08:21 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/radosbench-high-concurrency} 2
pass 5232715 2020-07-17 01:15:04 2020-07-17 02:18:52 2020-07-17 02:40:52 0:22:00 0:15:02 0:06:58 smithi master rhel 8.1 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{rhel_8} tasks/crash} 2
pass 5232716 2020-07-17 01:15:05 2020-07-17 02:19:02 2020-07-17 02:35:01 0:15:59 0:09:57 0:06:02 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-stupid openstack settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} 1
pass 5232717 2020-07-17 01:15:07 2020-07-17 02:19:03 2020-07-17 03:01:03 0:42:00 0:34:40 0:07:20 smithi master rhel 8.1 rados/singleton-bluestore/{all/cephtool msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
fail 5232718 2020-07-17 01:15:08 2020-07-17 02:19:05 2020-07-17 02:49:05 0:30:00 0:20:53 0:09:07 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_get_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)

pass 5232719 2020-07-17 01:15:09 2020-07-17 02:19:11 2020-07-17 05:03:16 2:44:05 2:38:38 0:05:27 smithi master rhel 8.1 rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{rhel_8}} 1
pass 5232720 2020-07-17 01:15:10 2020-07-17 02:20:01 2020-07-17 02:44:01 0:24:00 0:17:49 0:06:11 smithi master centos 8.1 rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
pass 5232721 2020-07-17 01:15:11 2020-07-17 02:20:54 2020-07-17 02:38:54 0:18:00 0:12:20 0:05:40 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_cephadm} 1
pass 5232722 2020-07-17 01:15:12 2020-07-17 02:21:22 2020-07-17 02:35:22 0:14:00 0:07:51 0:06:09 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/full-tiering rados supported-random-distro$/{ubuntu_latest}} 1
pass 5232723 2020-07-17 01:15:13 2020-07-17 02:21:22 2020-07-17 02:41:22 0:20:00 0:12:57 0:07:03 smithi master centos 8.1 rados/singleton/{all/mon-config-keys msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
fail 5232724 2020-07-17 01:15:14 2020-07-17 02:21:32 2020-07-17 02:37:32 0:16:00 0:09:30 0:06:30 smithi master centos 8.1 rados/cephadm/with-work/{distro/centos_latest fixed-2 mode/root msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi103 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b0e5ce16-c7d5-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi103:vg_nvme/lv_4'

pass 5232725 2020-07-17 01:15:16 2020-07-17 02:23:20 2020-07-17 04:03:22 1:40:02 1:30:15 0:09:47 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/radosbench} 2
fail 5232726 2020-07-17 01:15:17 2020-07-17 02:23:20 2020-07-17 02:41:19 0:17:59 0:08:26 0:09:33 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
Failure Reason:

Command failed on smithi035 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 390f6a7c-c7d6-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi035:vg_nvme/lv_4'

pass 5232727 2020-07-17 01:15:18 2020-07-17 02:23:20 2020-07-17 02:43:20 0:20:00 0:11:22 0:08:38 smithi master ubuntu 18.04 rados/multimon/{clusters/6 msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 2
pass 5232728 2020-07-17 01:15:19 2020-07-17 02:23:20 2020-07-17 03:03:20 0:40:00 0:29:15 0:10:45 smithi master rhel 8.1 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 5232729 2020-07-17 01:15:20 2020-07-17 02:23:20 2020-07-17 02:57:20 0:34:00 0:22:55 0:11:05 smithi master centos 8.1 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5232730 2020-07-17 01:15:21 2020-07-17 02:23:20 2020-07-17 02:53:20 0:30:00 0:21:13 0:08:47 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} msgr-failures/few msgr/async objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/lockdep} 2
pass 5232731 2020-07-17 01:15:22 2020-07-17 02:24:47 2020-07-17 02:54:46 0:29:59 0:23:41 0:06:18 smithi master rhel 8.1 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-many-deletes} 2
pass 5232732 2020-07-17 01:15:23 2020-07-17 02:27:18 2020-07-17 02:41:16 0:13:58 0:07:44 0:06:14 smithi master centos 8.1 rados/singleton/{all/mon-config msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
pass 5232733 2020-07-17 01:15:24 2020-07-17 02:27:18 2020-07-17 02:43:17 0:15:59 0:09:22 0:06:37 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-basic-min-osd-mem-target openstack settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
pass 5232734 2020-07-17 01:15:26 2020-07-17 02:27:18 2020-07-17 02:57:17 0:29:59 0:22:46 0:07:13 smithi master centos 8.1 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 5232735 2020-07-17 01:15:27 2020-07-17 02:27:18 2020-07-17 02:47:17 0:19:59 0:10:56 0:09:03 smithi master centos 8.1 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5232736 2020-07-17 01:15:28 2020-07-17 02:27:18 2020-07-17 03:01:18 0:34:00 0:22:59 0:11:01 smithi master centos 8.1 rados/monthrash/{ceph clusters/3-mons msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_api_tests} 2
fail 5232737 2020-07-17 01:15:29 2020-07-17 02:28:16 2020-07-17 02:44:15 0:15:59 0:07:50 0:08:09 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
Failure Reason:

Command failed on smithi190 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a7055e10-c7d6-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi190:vg_nvme/lv_4'

pass 5232738 2020-07-17 01:15:30 2020-07-17 02:28:59 2020-07-17 03:00:58 0:31:59 0:24:12 0:07:47 smithi master rhel 8.1 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} 2
pass 5232739 2020-07-17 01:15:31 2020-07-17 02:28:58 2020-07-17 02:54:58 0:26:00 0:18:04 0:07:56 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect} 2
fail 5232740 2020-07-17 01:15:33 2020-07-17 02:29:03 2020-07-17 03:05:03 0:36:00 0:25:06 0:10:54 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/few rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5232741 2020-07-17 01:15:34 2020-07-17 02:29:08 2020-07-17 02:39:07 0:09:59 0:03:54 0:06:05 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm_repos} 1
pass 5232742 2020-07-17 01:15:35 2020-07-17 02:29:24 2020-07-17 02:49:22 0:19:58 0:14:00 0:05:58 smithi master rhel 8.1 rados/singleton-nomsgr/{all/health-warnings rados supported-random-distro$/{rhel_8}} 1
pass 5232743 2020-07-17 01:15:36 2020-07-17 02:29:40 2020-07-17 02:51:40 0:22:00 0:15:34 0:06:26 smithi master centos 8.1 rados/singleton/{all/osd-backfill msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 1
fail 5232744 2020-07-17 01:15:37 2020-07-17 02:31:07 2020-07-17 02:59:07 0:28:00 0:17:10 0:10:50 smithi master rhel 8.0 rados/cephadm/with-work/{distro/rhel_8.0 fixed-2 mode/packaged msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi029 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7305dc96-c7d8-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi029:vg_nvme/lv_4'

pass 5232745 2020-07-17 01:15:38 2020-07-17 02:31:08 2020-07-17 03:01:08 0:30:00 0:17:19 0:12:41 smithi master rhel 8.1 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-low-osd-mem-target supported-random-distro$/{rhel_8} tasks/failover} 2
pass 5232746 2020-07-17 01:15:40 2020-07-17 02:31:08 2020-07-17 03:05:08 0:34:00 0:28:01 0:05:59 smithi master rhel 8.1 rados/standalone/{supported-random-distro$/{rhel_8} workloads/misc} 1
pass 5232747 2020-07-17 01:15:41 2020-07-17 02:31:09 2020-07-17 03:01:09 0:30:00 0:24:08 0:05:52 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-bitmap openstack settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
pass 5232748 2020-07-17 01:15:42 2020-07-17 02:31:14 2020-07-17 02:53:13 0:21:59 0:13:38 0:08:21 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/redirect_promote_tests} 2
fail 5232749 2020-07-17 01:15:43 2020-07-17 02:31:16 2020-07-17 02:45:15 0:13:59 0:06:08 0:07:51 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_latest fixed-2 start} 2
Failure Reason:

Command failed on smithi145 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e9961120-c7d6-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi145:vg_nvme/lv_4'

pass 5232750 2020-07-17 01:15:44 2020-07-17 02:31:16 2020-07-17 02:55:16 0:24:00 0:17:53 0:06:07 smithi master centos 8.1 rados/singleton/{all/osd-recovery-incomplete msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
fail 5232751 2020-07-17 01:15:45 2020-07-17 02:31:16 2020-07-17 03:01:16 0:30:00 0:21:01 0:08:59 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zstd supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_get_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)

pass 5232752 2020-07-17 01:15:46 2020-07-17 02:31:26 2020-07-17 04:55:30 2:24:04 2:17:08 0:06:56 smithi master ubuntu 18.04 rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} 1
fail 5232753 2020-07-17 01:15:47 2020-07-17 02:33:22 2020-07-17 02:49:20 0:15:58 0:06:27 0:09:31 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_latest start} 2
Failure Reason:

Command failed on smithi033 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5aff6b22-c7d7-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi033:vg_nvme/lv_4'

pass 5232754 2020-07-17 01:15:49 2020-07-17 02:33:22 2020-07-17 02:47:20 0:13:58 0:07:34 0:06:24 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/large-omap-object-warnings rados supported-random-distro$/{ubuntu_latest}} 1
pass 5232755 2020-07-17 01:15:50 2020-07-17 02:33:21 2020-07-17 02:57:20 0:23:59 0:17:14 0:06:45 smithi master rhel 8.1 rados/singleton/{all/osd-recovery msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
fail 5232756 2020-07-17 01:15:51 2020-07-17 02:33:22 2020-07-17 02:47:20 0:13:58 0:07:26 0:06:32 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_orch_cli} 1
Failure Reason:

Command failed on smithi167 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 40d8d288-c7d7-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi167:vg_nvme/lv_4'

pass 5232757 2020-07-17 01:15:52 2020-07-17 02:33:27 2020-07-17 02:51:26 0:17:59 0:11:19 0:06:40 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/redirect_set_object} 2
pass 5232758 2020-07-17 01:15:54 2020-07-17 02:35:17 2020-07-17 02:57:16 0:21:59 0:11:50 0:10:09 smithi master rhel 8.1 rados/multimon/{clusters/9 msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews} 3
pass 5232759 2020-07-17 01:15:55 2020-07-17 02:35:18 2020-07-17 03:21:17 0:45:59 0:10:42 0:35:17 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 5232760 2020-07-17 01:15:56 2020-07-17 02:35:17 2020-07-17 03:07:17 0:32:00 0:25:10 0:06:50 smithi master centos 8.1 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5232761 2020-07-17 01:15:57 2020-07-17 02:35:19 2020-07-17 04:51:21 2:16:02 2:08:10 0:07:52 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
pass 5232762 2020-07-17 01:15:58 2020-07-17 02:35:23 2020-07-17 03:05:23 0:30:00 0:11:19 0:18:41 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5232763 2020-07-17 01:16:00 2020-07-17 02:35:25 2020-07-17 03:15:25 0:40:00 0:32:25 0:07:35 smithi master rhel 8.1 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mostlyread} 2
pass 5232764 2020-07-17 01:16:01 2020-07-17 02:37:02 2020-07-17 03:03:02 0:26:00 0:19:10 0:06:50 smithi master centos 8.1 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects} 2
pass 5232765 2020-07-17 01:16:02 2020-07-17 02:37:03 2020-07-17 03:33:03 0:56:00 0:49:00 0:07:00 smithi master rhel 8.1 rados/monthrash/{ceph clusters/9-mons msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_osdmap_prune} 2
fail 5232766 2020-07-17 01:16:04 2020-07-17 02:37:05 2020-07-17 02:59:04 0:21:59 0:15:07 0:06:52 smithi master rhel 8.1 rados/cephadm/with-work/{distro/rhel_latest fixed-2 mode/root msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi109 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7a850604-c7d8-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi109:vg_nvme/lv_4'

pass 5232767 2020-07-17 01:16:05 2020-07-17 02:37:09 2020-07-17 02:57:08 0:19:59 0:13:11 0:06:48 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-comp openstack settings/optimized ubuntu_latest workloads/sample_fio} 1
pass 5232768 2020-07-17 01:16:06 2020-07-17 02:37:17 2020-07-17 02:51:16 0:13:59 0:07:30 0:06:29 smithi master ubuntu 18.04 rados/singleton/{all/peer msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 1
pass 5232769 2020-07-17 01:16:07 2020-07-17 02:37:18 2020-07-17 02:55:18 0:18:00 0:11:40 0:06:20 smithi master centos 8.1 rados/cephadm/orchestrator_cli/{2-node-mgr orchestrator_cli supported-random-distro$/{centos_8}} 2
pass 5232770 2020-07-17 01:16:09 2020-07-17 02:37:20 2020-07-17 02:57:20 0:20:00 0:11:00 0:09:00 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/set-chunk-promote-flush} 2
fail 5232771 2020-07-17 01:16:10 2020-07-17 02:37:33 2020-07-17 03:03:33 0:26:00 0:13:54 0:12:06 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi040 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 21d36c84-c7d9-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi040:vg_nvme/lv_4'

fail 5232772 2020-07-17 01:16:11 2020-07-17 02:39:03 2020-07-17 03:03:00 0:23:57 0:07:45 0:16:12 smithi master centos 8.0 rados/cephadm/smoke/{distro/centos_8.0 fixed-2 start} 2
Failure Reason:

Command failed on smithi168 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 39ff906c-c7d9-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi168:vg_nvme/lv_4'

pass 5232773 2020-07-17 01:16:12 2020-07-17 02:39:02 2020-07-17 03:03:00 0:23:58 0:16:13 0:07:45 smithi master rhel 8.1 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-stupid supported-random-distro$/{rhel_8} tasks/insights} 2
pass 5232774 2020-07-17 01:16:13 2020-07-17 02:39:02 2020-07-17 03:13:01 0:33:59 0:25:45 0:08:14 smithi master centos 8.1 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
pass 5232775 2020-07-17 01:16:15 2020-07-17 02:39:02 2020-07-17 02:57:00 0:17:58 0:10:23 0:07:35 smithi master centos 8.1 rados/singleton/{all/pg-autoscaler msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 2
pass 5232776 2020-07-17 01:16:16 2020-07-17 02:39:03 2020-07-17 02:55:01 0:15:58 0:08:41 0:07:17 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output rados supported-random-distro$/{ubuntu_latest}} 1
fail 5232777 2020-07-17 01:16:17 2020-07-17 02:39:10 2020-07-17 02:57:09 0:17:59 0:07:55 0:10:04 smithi master centos 8.0 rados/cephadm/smoke-roleless/{distro/centos_8.0 start} 2
Failure Reason:

Command failed on smithi190 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8106c5a8-c7d8-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi190:vg_nvme/lv_4'

pass 5232778 2020-07-17 01:16:18 2020-07-17 02:39:12 2020-07-17 02:57:10 0:17:58 0:09:58 0:08:00 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-low-osd-mem-target openstack settings/optimized ubuntu_latest workloads/sample_radosbench} 1
pass 5232779 2020-07-17 01:16:19 2020-07-17 02:41:15 2020-07-17 03:09:15 0:28:00 0:19:37 0:08:23 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/set-chunks-read} 2
dead 5232780 2020-07-17 01:16:21 2020-07-17 02:41:18 2020-07-17 14:43:51 12:02:33 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-start-upgrade 3-wait distro$/{ubuntu_18.04} fixed-2} 2
pass 5232781 2020-07-17 01:16:22 2020-07-17 02:41:21 2020-07-17 03:17:21 0:36:00 0:29:17 0:06:43 smithi master centos 8.1 rados/singleton-bluestore/{all/cephtool msgr-failures/many msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
fail 5232782 2020-07-17 01:16:23 2020-07-17 02:41:23 2020-07-17 03:09:23 0:28:00 0:19:30 0:08:30 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{centos_8} tasks/dashboard} 2
Failure Reason:

Test failure: test_get_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)

pass 5232783 2020-07-17 01:16:24 2020-07-17 02:44:40 2020-07-17 02:56:38 0:11:58 0:06:23 0:05:35 smithi master ubuntu 18.04 rados/objectstore/{backends/fusestore supported-random-distro$/{ubuntu_latest}} 1
pass 5232784 2020-07-17 01:16:25 2020-07-17 02:44:40 2020-07-17 03:02:39 0:17:59 0:11:22 0:06:37 smithi master rhel 8.1 rados/singleton/{all/pg-removal-interruption msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
pass 5232785 2020-07-17 01:16:26 2020-07-17 02:45:43 2020-07-17 03:05:43 0:20:00 0:11:16 0:08:44 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/readwrite} 2
pass 5232786 2020-07-17 01:16:27 2020-07-17 02:45:51 2020-07-17 02:59:50 0:13:59 0:07:24 0:06:35 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_adoption} 1
pass 5232787 2020-07-17 01:16:28 2020-07-17 02:45:52 2020-07-17 03:45:52 1:00:00 0:53:28 0:06:32 smithi master rhel 8.1 rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon} 1
pass 5232788 2020-07-17 01:16:29 2020-07-17 02:45:52 2020-07-17 03:13:51 0:27:59 0:10:21 0:17:38 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5232789 2020-07-17 01:16:31 2020-07-17 05:28:01 2020-07-17 05:46:00 0:17:59 0:12:11 0:05:48 smithi master rhel 8.1 rados/multimon/{clusters/21 msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} 3
pass 5232790 2020-07-17 01:16:32 2020-07-17 05:28:06 2020-07-17 06:04:06 0:36:00 0:29:21 0:06:39 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 5232791 2020-07-17 01:16:33 2020-07-17 05:28:09 2020-07-17 06:00:09 0:32:00 0:24:23 0:07:37 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5232792 2020-07-17 01:16:34 2020-07-17 05:28:12 2020-07-17 05:46:11 0:17:59 0:11:40 0:06:19 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
pass 5232793 2020-07-17 01:16:35 2020-07-17 05:30:18 2020-07-17 06:04:18 0:34:00 0:27:01 0:06:59 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
pass 5232794 2020-07-17 01:16:36 2020-07-17 05:30:18 2020-07-17 05:50:18 0:20:00 0:14:17 0:05:43 smithi master ubuntu 18.04 rados/singleton/{all/radostool msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 5232795 2020-07-17 01:16:37 2020-07-17 05:30:19 2020-07-17 05:48:18 0:17:59 0:10:30 0:07:29 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04 fixed-2 mode/packaged msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi006 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 07de9526-c7f0-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi006:vg_nvme/lv_4'

pass 5232796 2020-07-17 01:16:38 2020-07-17 05:30:18 2020-07-17 05:50:18 0:20:00 0:13:48 0:06:12 smithi master rhel 8.1 rados/singleton-nomsgr/{all/librados_hello_world rados supported-random-distro$/{rhel_8}} 1
pass 5232797 2020-07-17 01:16:40 2020-07-17 05:30:18 2020-07-17 06:10:19 0:40:01 0:32:52 0:07:09 smithi master rhel 8.1 rados/monthrash/{ceph clusters/3-mons msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} 2
pass 5232798 2020-07-17 01:16:41 2020-07-17 05:30:19 2020-07-17 06:00:18 0:29:59 0:24:20 0:05:39 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} 2
pass 5232799 2020-07-17 01:16:42 2020-07-17 05:30:19 2020-07-17 05:54:18 0:23:59 0:17:25 0:06:34 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-low-osd-mem-target openstack settings/optimized ubuntu_latest workloads/cosbench_64K_read_write} 1
pass 5232800 2020-07-17 01:16:43 2020-07-17 05:30:20 2020-07-17 05:54:19 0:23:59 0:18:20 0:05:39 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
pass 5232801 2020-07-17 01:16:44 2020-07-17 05:31:59 2020-07-17 06:09:59 0:38:00 0:31:46 0:06:14 smithi master rhel 8.1 rados/singleton/{all/random-eio msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} 2
fail 5232802 2020-07-17 01:16:45 2020-07-17 05:31:59 2020-07-17 05:45:58 0:13:59 0:06:22 0:07:37 smithi master centos 8.1 rados/cephadm/smoke/{distro/centos_latest fixed-2 start} 2
Failure Reason:

Command failed on smithi068 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0cae7896-c7f0-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi068:vg_nvme/lv_4'

pass 5232803 2020-07-17 01:16:46 2020-07-17 05:31:59 2020-07-17 06:09:59 0:38:00 0:30:35 0:07:25 smithi master centos 8.1 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{centos_8} tasks/module_selftest} 2
pass 5232804 2020-07-17 01:16:48 2020-07-17 05:32:06 2020-07-17 06:02:06 0:30:00 0:23:47 0:06:13 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/small-objects-localized} 2
fail 5232805 2020-07-17 01:16:49 2020-07-17 05:32:07 2020-07-17 05:56:07 0:24:00 0:13:29 0:10:31 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi180 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 48f7aec0-c7f1-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi180:vg_nvme/lv_4'

fail 5232806 2020-07-17 01:16:50 2020-07-17 05:32:10 2020-07-17 05:44:09 0:11:59 0:06:16 0:05:43 smithi master centos 8.1 rados/cephadm/smoke-roleless/{distro/centos_latest start} 2
Failure Reason:

Command failed on smithi017 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f28b955c-c7ef-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi017:vg_nvme/lv_4'

fail 5232807 2020-07-17 01:16:51 2020-07-17 05:32:11 2020-07-17 06:12:12 0:40:01 0:34:19 0:05:42 smithi master rhel 8.1 rados/singleton/{all/rebuild-mondb msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

timed out waiting for admin_socket to appear after osd.0 restart

pass 5232808 2020-07-17 01:16:52 2020-07-17 05:32:12 2020-07-17 05:54:12 0:22:00 0:15:42 0:06:18 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/msgr rados supported-random-distro$/{ubuntu_latest}} 1
pass 5232809 2020-07-17 01:16:53 2020-07-17 05:32:16 2020-07-17 05:54:16 0:22:00 0:14:57 0:07:03 smithi master centos 8.1 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/repair_test} 2
fail 5232810 2020-07-17 01:16:54 2020-07-17 05:33:59 2020-07-17 05:51:58 0:17:59 0:12:07 0:05:52 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi005 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c2f506b0-c7f0-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi005:vg_nvme/lv_4'

pass 5232811 2020-07-17 01:16:56 2020-07-17 05:33:59 2020-07-17 05:59:59 0:26:00 0:18:41 0:07:19 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-stupid openstack settings/optimized ubuntu_latest workloads/cosbench_64K_write} 1
pass 5232812 2020-07-17 01:16:57 2020-07-17 05:34:07 2020-07-17 06:02:07 0:28:00 0:20:59 0:07:01 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/small-objects} 2
pass 5232813 2020-07-17 01:16:58 2020-07-17 05:34:11 2020-07-17 06:12:11 0:38:00 0:30:40 0:07:20 smithi master rhel 8.1 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/fastclose rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
pass 5232814 2020-07-17 01:16:59 2020-07-17 05:34:18 2020-07-17 05:44:17 0:09:59 0:03:50 0:06:09 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_cephadm_repos} 1
fail 5232815 2020-07-17 01:17:00 2020-07-17 05:34:23 2020-07-17 06:02:23 0:28:00 0:21:04 0:06:56 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-low-osd-mem-target supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_get_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)

pass 5232816 2020-07-17 01:17:01 2020-07-17 05:36:28 2020-07-17 05:58:27 0:21:59 0:16:47 0:05:12 smithi master rhel 8.1 rados/objectstore/{backends/keyvaluedb supported-random-distro$/{rhel_8}} 1
pass 5232817 2020-07-17 01:17:03 2020-07-17 05:36:28 2020-07-17 05:58:27 0:21:59 0:15:59 0:06:00 smithi master rhel 8.1 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5232818 2020-07-17 01:17:04 2020-07-17 05:36:29 2020-07-17 06:06:29 0:30:00 0:23:36 0:06:24 smithi master centos 8.1 rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
pass 5232819 2020-07-17 01:17:05 2020-07-17 05:36:28 2020-07-17 06:00:28 0:24:00 0:16:32 0:07:28 smithi master ubuntu 18.04 rados/singleton/{all/recovery-preemption msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
pass 5232820 2020-07-17 01:17:06 2020-07-17 05:36:28 2020-07-17 05:52:27 0:15:59 0:09:16 0:06:43 smithi master centos 8.1 rados/multimon/{clusters/3 msgr-failures/many msgr/async no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/mon_recovery} 2
pass 5232821 2020-07-17 01:17:07 2020-07-17 05:36:28 2020-07-17 05:54:27 0:17:59 0:10:11 0:07:48 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 5232822 2020-07-17 01:17:09 2020-07-17 05:36:29 2020-07-17 06:12:29 0:36:00 0:28:13 0:07:47 smithi master rhel 8.1 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5232823 2020-07-17 01:17:10 2020-07-17 05:36:28 2020-07-17 06:34:29 0:58:01 0:50:59 0:07:02 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async objectstore/filestore-xfs rados tasks/rados_api_tests validater/valgrind} 2
fail 5232824 2020-07-17 01:17:11 2020-07-17 05:36:28 2020-07-17 05:54:27 0:17:59 0:10:56 0:07:03 smithi master rhel 8.0 rados/cephadm/smoke/{distro/rhel_8.0 fixed-2 start} 2
Failure Reason:

Command failed on smithi057 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2be582da-c7f1-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi057:vg_nvme/lv_4'

pass 5232825 2020-07-17 01:17:13 2020-07-17 05:38:14 2020-07-17 06:16:14 0:38:00 0:30:43 0:07:17 smithi master rhel 8.1 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
pass 5232826 2020-07-17 01:17:14 2020-07-17 05:38:14 2020-07-17 06:26:14 0:48:00 0:41:12 0:06:48 smithi master centos 8.1 rados/monthrash/{ceph clusters/9-mons msgr-failures/mon-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/one workloads/snaps-few-objects} 2
pass 5232827 2020-07-17 01:17:15 2020-07-17 05:38:14 2020-07-17 05:58:13 0:19:59 0:13:00 0:06:59 smithi master rhel 8.1 rados/singleton/{all/resolve_stuck_peering msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 2
fail 5232828 2020-07-17 01:17:16 2020-07-17 05:38:20 2020-07-17 05:56:19 0:17:59 0:10:35 0:07:24 smithi master rhel 8.0 rados/cephadm/smoke-roleless/{distro/rhel_8.0 start} 2
Failure Reason:

Command failed on smithi171 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8ef82792-c7f1-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi171:vg_nvme/lv_4'

pass 5232829 2020-07-17 01:17:18 2020-07-17 05:40:12 2020-07-17 06:14:12 0:34:00 0:26:53 0:07:07 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 5232830 2020-07-17 01:17:19 2020-07-17 05:40:11 2020-07-17 06:08:11 0:28:00 0:21:44 0:06:16 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/multi-backfill-reject rados supported-random-distro$/{ubuntu_latest}} 2
pass 5232831 2020-07-17 01:17:20 2020-07-17 05:40:11 2020-07-17 05:58:11 0:18:00 0:12:05 0:05:55 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-basic-min-osd-mem-target openstack settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
pass 5232832 2020-07-17 01:17:21 2020-07-17 05:40:14 2020-07-17 06:02:13 0:21:59 0:14:33 0:07:26 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/progress} 2
pass 5232833 2020-07-17 01:17:23 2020-07-17 05:42:23 2020-07-17 08:46:27 3:04:04 2:58:34 0:05:30 smithi master centos 8.1 rados/standalone/{supported-random-distro$/{centos_8} workloads/osd} 1
fail 5232834 2020-07-17 01:17:24 2020-07-17 05:42:23 2020-07-17 06:00:22 0:17:59 0:10:54 0:07:05 smithi master centos 8.0 rados/cephadm/with-work/{distro/centos_8.0 fixed-2 mode/packaged msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi017 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 153d7cbc-c7f2-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi017:vg_nvme/lv_4'

pass 5232835 2020-07-17 01:17:25 2020-07-17 05:42:23 2020-07-17 06:02:22 0:19:59 0:13:30 0:06:29 smithi master rhel 8.1 rados/singleton/{all/test-crash msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
fail 5232836 2020-07-17 01:17:26 2020-07-17 05:42:32 2020-07-17 05:58:32 0:16:00 0:08:33 0:07:27 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Command failed on smithi055 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b9eefd86-c7f1-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi055:vg_nvme/lv_4'

pass 5232837 2020-07-17 01:17:27 2020-07-17 05:42:54 2020-07-17 06:00:53 0:17:59 0:10:24 0:07:35 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/scrub_test} 2
fail 5232838 2020-07-17 01:17:29 2020-07-17 05:44:30 2020-07-17 06:26:30 0:42:00 0:30:49 0:11:11 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/few rados thrashers/none thrashosds-health workloads/radosbench} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5232839 2020-07-17 01:17:30 2020-07-17 05:44:31 2020-07-17 06:18:30 0:33:59 0:27:25 0:06:34 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-localized} 2
fail 5232840 2020-07-17 01:17:31 2020-07-17 05:44:55 2020-07-17 06:06:55 0:22:00 0:15:09 0:06:51 smithi master rhel 8.1 rados/cephadm/smoke/{distro/rhel_latest fixed-2 start} 2
Failure Reason:

Command failed on smithi144 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 265bc422-c7f2-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi144:vg_nvme/lv_4'

pass 5232841 2020-07-17 01:17:32 2020-07-17 05:46:03 2020-07-17 06:12:03 0:26:00 0:20:06 0:05:54 smithi master rhel 8.1 rados/singleton/{all/test_envlibrados_for_rocksdb msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
pass 5232842 2020-07-17 01:17:33 2020-07-17 05:46:04 2020-07-17 06:04:03 0:17:59 0:10:41 0:07:18 smithi master centos 8.1 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5232843 2020-07-17 01:17:35 2020-07-17 05:46:03 2020-07-17 06:04:03 0:18:00 0:11:57 0:06:03 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-bitmap openstack settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
fail 5232844 2020-07-17 01:17:36 2020-07-17 05:46:09 2020-07-17 06:02:08 0:15:59 0:09:35 0:06:24 smithi master rhel 8.1 rados/cephadm/smoke-roleless/{distro/rhel_latest start} 2
Failure Reason:

Command failed on smithi138 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 705d32fe-c7f2-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi138:vg_nvme/lv_4'

pass 5232845 2020-07-17 01:17:37 2020-07-17 05:46:13 2020-07-17 06:10:12 0:23:59 0:17:01 0:06:58 smithi master centos 8.1 rados/singleton-nomsgr/{all/osd_stale_reads rados supported-random-distro$/{centos_8}} 1
pass 5232846 2020-07-17 01:17:39 2020-07-17 05:48:27 2020-07-17 06:30:27 0:42:00 0:36:14 0:05:46 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/snaps-few-objects} 2
pass 5232847 2020-07-17 01:17:40 2020-07-17 05:48:28 2020-07-17 06:02:26 0:13:58 0:07:06 0:06:52 smithi master ubuntu 18.04 rados/multimon/{clusters/6 msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 2
pass 5232848 2020-07-17 01:17:41 2020-07-17 05:48:27 2020-07-17 06:26:27 0:38:00 0:31:19 0:06:41 smithi master rhel 8.1 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 5232849 2020-07-17 01:17:42 2020-07-17 05:48:27 2020-07-17 06:26:27 0:38:00 0:30:04 0:07:56 smithi master rhel 8.1 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5232850 2020-07-17 01:17:44 2020-07-17 05:49:57 2020-07-17 06:17:57 0:28:00 0:22:17 0:05:43 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/lockdep} 2
pass 5232851 2020-07-17 01:17:45 2020-07-17 05:49:58 2020-07-17 06:25:58 0:36:00 0:30:15 0:05:45 smithi master ubuntu 18.04 rados/singleton-bluestore/{all/cephtool msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 5232852 2020-07-17 01:17:46 2020-07-17 05:49:58 2020-07-17 06:15:57 0:25:59 0:20:17 0:05:42 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-stupid supported-random-distro$/{centos_8} tasks/dashboard} 2
Failure Reason:

Test failure: test_get_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)

pass 5232853 2020-07-17 01:17:48 2020-07-17 05:49:57 2020-07-17 06:25:57 0:36:00 0:29:32 0:06:28 smithi master ubuntu 18.04 rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{ubuntu_latest}} 1
fail 5232854 2020-07-17 01:17:49 2020-07-17 05:49:58 2020-07-17 06:05:57 0:15:59 0:10:07 0:05:52 smithi master centos 8.1 rados/cephadm/with-work/{distro/centos_latest fixed-2 mode/root msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi137 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cd9acdd2-c7f2-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi137:vg_nvme/lv_4'

pass 5232855 2020-07-17 01:17:50 2020-07-17 05:49:58 2020-07-17 06:21:57 0:31:59 0:24:39 0:07:20 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
pass 5232856 2020-07-17 01:17:51 2020-07-17 05:49:58 2020-07-17 07:30:00 1:40:02 1:34:22 0:05:40 smithi master centos 8.1 rados/singleton/{all/thrash-backfill-full msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 2
pass 5232857 2020-07-17 01:17:53 2020-07-17 05:49:58 2020-07-17 06:25:57 0:35:59 0:30:35 0:05:24 smithi master rhel 8.1 rados/monthrash/{ceph clusters/3-mons msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/snaps-few-objects} 2
pass 5232858 2020-07-17 01:17:54 2020-07-17 05:49:58 2020-07-17 06:05:58 0:16:00 0:09:24 0:06:36 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
pass 5232859 2020-07-17 01:17:55 2020-07-17 05:50:00 2020-07-17 06:30:01 0:40:01 0:33:13 0:06:48 smithi master rhel 8.1 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 5232860 2020-07-17 01:17:56 2020-07-17 05:50:02 2020-07-17 06:08:02 0:18:00 0:11:35 0:06:25 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-lz4 supported-random-distro$/{ubuntu_latest} tasks/prometheus} 2
pass 5232861 2020-07-17 01:17:57 2020-07-17 05:50:02 2020-07-17 06:34:03 0:44:01 0:37:22 0:06:39 smithi master ubuntu 18.04 rados/singleton/{all/thrash-eio msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 2
fail 5232862 2020-07-17 01:17:59 2020-07-17 05:50:07 2020-07-17 06:04:06 0:13:59 0:06:58 0:07:01 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04 fixed-2 start} 2
Failure Reason:

Command failed on smithi151 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9affb5fe-c7f2-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi151:vg_nvme/lv_4'

pass 5232863 2020-07-17 01:18:00 2020-07-17 05:50:07 2020-07-17 06:10:07 0:20:00 0:14:36 0:05:24 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2
pass 5232864 2020-07-17 01:18:01 2020-07-17 05:50:08 2020-07-17 06:10:07 0:19:59 0:12:29 0:07:30 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-comp openstack settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
pass 5232865 2020-07-17 01:18:02 2020-07-17 05:50:08 2020-07-17 06:22:08 0:32:00 0:24:01 0:07:59 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} 2
pass 5232866 2020-07-17 01:18:03 2020-07-17 05:50:09 2020-07-17 06:10:08 0:19:59 0:12:32 0:07:27 smithi master rhel 8.1 rados/singleton-nomsgr/{all/pool-access rados supported-random-distro$/{rhel_8}} 1
fail 5232867 2020-07-17 01:18:05 2020-07-17 05:50:09 2020-07-17 06:04:08 0:13:59 0:06:41 0:07:18 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04 start} 2
Failure Reason:

Command failed on smithi149 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b6668796-c7f2-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi149:vg_nvme/lv_4'

pass 5232868 2020-07-17 01:18:06 2020-07-17 05:50:09 2020-07-17 06:18:09 0:28:00 0:19:30 0:08:30 smithi master ubuntu 18.04 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 2
fail 5232869 2020-07-17 01:18:07 2020-07-17 05:50:09 2020-07-17 06:14:09 0:24:00 0:16:03 0:07:57 smithi master rhel 8.0 rados/cephadm/with-work/{distro/rhel_8.0 fixed-2 mode/packaged msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi075 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aad54f6a-c7f3-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi075:vg_nvme/lv_4'

fail 5232870 2020-07-17 01:18:09 2020-07-17 05:50:10 2020-07-17 06:34:10 0:44:00 0:32:29 0:11:31 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5232871 2020-07-17 01:18:10 2020-07-17 05:50:11 2020-07-17 06:18:11 0:28:00 0:21:29 0:06:31 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 5232872 2020-07-17 01:18:11 2020-07-17 05:50:11 2020-07-17 06:10:11 0:20:00 0:11:58 0:08:02 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5232873 2020-07-17 01:18:12 2020-07-17 05:50:12 2020-07-17 07:18:14 1:28:02 1:22:22 0:05:40 smithi master rhel 8.1 rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} 1
pass 5232874 2020-07-17 01:18:13 2020-07-17 05:50:13 2020-07-17 06:10:12 0:19:59 0:12:47 0:07:12 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_cephadm} 1
pass 5232875 2020-07-17 01:18:15 2020-07-17 05:50:13 2020-07-17 06:08:13 0:18:00 0:10:55 0:07:05 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-low-osd-mem-target openstack settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
pass 5232876 2020-07-17 01:18:16 2020-07-17 05:50:13 2020-07-17 06:24:13 0:34:00 0:27:34 0:06:26 smithi master ubuntu 18.04 rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 2
fail 5232877 2020-07-17 01:18:17 2020-07-17 05:50:19 2020-07-17 06:08:18 0:17:59 0:08:59 0:09:00 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
Failure Reason:

Command failed on smithi139 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0e46e924-c7f3-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi139:vg_nvme/lv_4'

pass 5232878 2020-07-17 01:18:18 2020-07-17 05:50:19 2020-07-17 06:08:18 0:17:59 0:07:10 0:10:49 smithi master ubuntu 18.04 rados/multimon/{clusters/9 msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 3
pass 5232879 2020-07-17 01:18:19 2020-07-17 05:51:37 2020-07-17 06:09:36 0:17:59 0:11:17 0:06:42 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 5232880 2020-07-17 01:18:20 2020-07-17 05:51:37 2020-07-17 06:27:37 0:36:00 0:28:26 0:07:34 smithi master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5232881 2020-07-17 01:18:22 2020-07-17 05:52:00 2020-07-17 08:10:03 2:18:03 2:10:56 0:07:07 smithi master centos 8.1 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/valgrind} 2
pass 5232882 2020-07-17 01:18:23 2020-07-17 05:52:29 2020-07-17 07:34:31 1:42:02 1:35:01 0:07:01 smithi master centos 8.1 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-radosbench} 2
pass 5232883 2020-07-17 01:18:24 2020-07-17 05:52:32 2020-07-17 06:26:32 0:34:00 0:27:18 0:06:42 smithi master centos 8.1 rados/singleton-nomsgr/{all/recovery-unfound-found rados supported-random-distro$/{centos_8}} 1
dead 5232884 2020-07-17 01:18:25 2020-07-17 05:54:29 2020-07-17 06:06:28 0:11:59 0:05:26 0:06:33 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-agent-big} 2
Failure Reason:

{'smithi104.front.sepia.ceph.com': {'attempts': 12, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result", 'changed': True}}

dead 5232885 2020-07-17 01:18:27 2020-07-17 05:54:29 2020-07-17 06:08:28 0:13:59 0:05:04 0:08:55 smithi master rhel 8.1 rados/monthrash/{ceph clusters/9-mons msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/sync workloads/pool-create-delete} 2
Failure Reason:

{'smithi065.front.sepia.ceph.com': {'attempts': 12, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result", 'changed': True}, 'smithi028.front.sepia.ceph.com': {'attempts': 12, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result", 'changed': True}}

fail 5232886 2020-07-17 01:18:28 2020-07-17 05:54:30 2020-07-17 06:24:29 0:29:59 0:19:21 0:10:38 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_get_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)

fail 5232887 2020-07-17 01:18:29 2020-07-17 05:54:29 2020-07-17 07:24:30 1:30:01 1:06:44 0:23:17 smithi master rhel 8.1 rados/objectstore/{backends/objectstore supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''

fail 5232888 2020-07-17 01:18:31 2020-07-17 05:54:29 2020-07-17 06:12:28 0:17:59 0:08:00 0:09:59 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
Failure Reason:

Command failed on smithi055 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9d24497a-c7f3-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi055:vg_nvme/lv_4'

pass 5232889 2020-07-17 01:18:32 2020-07-17 05:54:29 2020-07-17 06:12:28 0:17:59 0:11:41 0:06:18 smithi master rhel 8.1 rados/singleton/{all/watch-notify-same-primary msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
pass 5232890 2020-07-17 01:18:33 2020-07-17 05:57:35 2020-07-17 06:25:28 0:27:53 0:20:17 0:07:36 smithi master rhel 8.1 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} 2
pass 5232891 2020-07-17 01:18:35 2020-07-17 05:57:31 2020-07-17 06:13:28 0:15:57 0:07:52 0:08:05 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/workunits} 2
fail 5232892 2020-07-17 01:18:36 2020-07-17 05:58:14 2020-07-17 06:20:12 0:21:58 0:15:05 0:06:53 smithi master rhel 8.1 rados/cephadm/with-work/{distro/rhel_latest fixed-2 mode/root msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi073 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ac393848-c7f4-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi073:vg_nvme/lv_4'

pass 5232893 2020-07-17 01:18:37 2020-07-17 05:58:19 2020-07-17 06:16:15 0:17:56 0:10:32 0:07:24 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-stupid openstack settings/optimized ubuntu_latest workloads/fio_4M_rand_write} 1
pass 5232894 2020-07-17 01:18:39 2020-07-17 05:59:08 2020-07-17 06:17:07 0:17:59 0:10:28 0:07:31 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache-agent-small} 2
pass 5232895 2020-07-17 01:18:40 2020-07-17 05:59:10 2020-07-17 06:09:06 0:09:56 0:03:52 0:06:04 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm_repos} 1
pass 5232896 2020-07-17 01:18:41 2020-07-17 05:59:10 2020-07-17 06:13:07 0:13:57 0:06:56 0:07:01 smithi master ubuntu 18.04 rados/singleton/{all/admin-socket msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
fail 5232897 2020-07-17 01:18:42 2020-07-17 06:00:00 2020-07-17 06:11:59 0:11:59 0:05:55 0:06:04 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_latest fixed-2 start} 2
Failure Reason:

Command failed on smithi115 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ddfb2dce-c7f3-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi115:vg_nvme/lv_4'

pass 5232898 2020-07-17 01:18:44 2020-07-17 06:00:29 2020-07-17 06:18:28 0:17:59 0:12:51 0:05:08 smithi master rhel 8.1 rados/singleton-nomsgr/{all/version-number-sanity rados supported-random-distro$/{rhel_8}} 1
pass 5232899 2020-07-17 01:18:45 2020-07-17 06:00:29 2020-07-17 06:32:29 0:32:00 0:24:42 0:07:18 smithi master rhel 8.1 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
pass 5232900 2020-07-17 01:18:46 2020-07-17 06:00:29 2020-07-17 06:20:28 0:19:59 0:10:38 0:09:21 smithi master centos 8.1 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 5232901 2020-07-17 01:18:47 2020-07-17 06:00:30 2020-07-17 06:14:29 0:13:59 0:08:13 0:05:46 smithi master centos 8.1 rados/singleton/{all/deduptool msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
fail 5232902 2020-07-17 01:18:49 2020-07-17 06:00:55 2020-07-17 06:36:55 0:36:00 0:24:23 0:11:37 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5232903 2020-07-17 01:18:50 2020-07-17 06:02:22 2020-07-17 06:14:21 0:11:59 0:06:01 0:05:58 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_latest start} 2
Failure Reason:

Command failed on smithi184 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2da42a74-c7f4-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi184:vg_nvme/lv_4'

pass 5232904 2020-07-17 01:18:51 2020-07-17 06:02:22 2020-07-17 06:32:22 0:30:00 0:21:29 0:08:31 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps-readproxy} 2
fail 5232905 2020-07-17 01:18:52 2020-07-17 06:02:22 2020-07-17 06:16:21 0:13:59 0:08:49 0:05:10 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_orch_cli} 1
Failure Reason:

Command failed on smithi037 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2aed1595b49f256bf6c2a2a510fa4ee002ed6cd0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8b04928a-c7f4-11ea-a06f-001a4aab830c -- ceph orch daemon add osd smithi037:vg_nvme/lv_4'

pass 5232906 2020-07-17 01:18:54 2020-07-17 06:02:22 2020-07-17 06:22:22 0:20:00 0:12:40 0:07:20 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-basic-min-osd-mem-target openstack settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1