Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 4868778 2020-03-19 19:01:18 2020-03-19 19:01:35 2020-03-19 19:17:34 0:15:59 0:08:41 0:07:18 smithi master ubuntu 18.04 rados/singleton/{all/test-crash.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4868779 2020-03-19 19:01:19 2020-03-19 19:01:36 2020-03-19 19:27:35 0:25:59 0:12:34 0:13:25 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} 2
fail 4868780 2020-03-19 19:01:19 2020-03-19 23:16:48 2020-03-19 23:54:48 0:38:00 0:26:56 0:11:04 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_with_drive_group (tasks.mgr.dashboard.test_osd.OsdTest)

fail 4868781 2020-03-19 19:01:20 2020-03-19 23:16:50 2020-03-19 23:56:50 0:40:00 0:27:35 0:12:25 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_with_drive_group (tasks.mgr.dashboard.test_osd.OsdTest)

pass 4868782 2020-03-19 19:01:21 2020-03-19 19:01:35 2020-03-19 19:27:35 0:26:00 0:16:19 0:09:41 smithi master centos 8.1 rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/one.yaml workloads/pool-create-delete.yaml} 2
fail 4868783 2020-03-19 19:01:22 2020-03-19 23:17:13 2020-03-19 23:35:12 0:17:59 0:10:46 0:07:13 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm.yaml} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi125 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7ffcbba56021d111befdfbe45828c0a1854de1b6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 4868784 2020-03-19 19:01:23 2020-03-19 23:18:59 2020-03-19 23:52:58 0:33:59 0:27:29 0:06:30 smithi master rhel 8.1 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4868785 2020-03-19 19:01:24 2020-03-19 23:18:59 2020-03-20 00:02:59 0:44:00 0:28:29 0:15:31 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
pass 4868786 2020-03-19 19:01:25 2020-03-19 23:18:59 2020-03-19 23:50:58 0:31:59 0:20:54 0:11:05 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
pass 4868787 2020-03-19 19:01:26 2020-03-19 23:21:07 2020-03-19 23:53:06 0:31:59 0:18:38 0:13:21 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps-balanced.yaml} 2
pass 4868788 2020-03-19 19:01:26 2020-03-19 23:21:07 2020-03-19 23:43:06 0:21:59 0:09:15 0:12:44 smithi master centos 8.1 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-avl.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
fail 4868789 2020-03-19 19:01:27 2020-03-19 23:21:07 2020-03-19 23:59:07 0:38:00 0:29:47 0:08:13 smithi master rhel 8.1 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-lz4.yaml supported-random-distro$/{rhel_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_with_drive_group (tasks.mgr.dashboard.test_osd.OsdTest)

fail 4868790 2020-03-19 19:01:28 2020-03-19 23:21:07 2020-03-20 00:25:07 1:04:00 0:47:52 0:16:08 smithi master centos 8.1 rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

"2020-03-19T23:55:58.404460+0000 mon.b (mon.0) 260 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 4868791 2020-03-19 19:01:29 2020-03-19 23:22:48 2020-03-20 00:04:48 0:42:00 0:27:03 0:14:57 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-snappy.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_with_drive_group (tasks.mgr.dashboard.test_osd.OsdTest)

fail 4868792 2020-03-19 19:01:30 2020-03-19 23:22:48 2020-03-20 00:12:48 0:50:00 0:27:47 0:22:13 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zlib.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_with_drive_group (tasks.mgr.dashboard.test_osd.OsdTest)

fail 4868793 2020-03-19 19:01:31 2020-03-19 23:22:48 2020-03-19 23:46:48 0:24:00 0:09:39 0:14:21 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm.yaml} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi204 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7ffcbba56021d111befdfbe45828c0a1854de1b6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 4868794 2020-03-19 19:01:32 2020-03-19 23:22:48 2020-03-20 00:02:48 0:40:00 0:21:46 0:18:14 smithi master rhel 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
fail 4868795 2020-03-19 19:01:33 2020-03-19 23:22:48 2020-03-19 23:54:48 0:32:00 0:20:57 0:11:03 smithi master ubuntu 18.04 rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7ffcbba56021d111befdfbe45828c0a1854de1b6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

fail 4868796 2020-03-19 19:01:33 2020-03-19 23:22:50 2020-03-20 00:04:49 0:41:59 0:27:29 0:14:30 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zstd.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_with_drive_group (tasks.mgr.dashboard.test_osd.OsdTest)

pass 4868797 2020-03-19 19:01:34 2020-03-19 23:22:51 2020-03-20 02:02:52 2:40:01 1:58:53 0:41:08 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} 3
pass 4868798 2020-03-19 19:01:35 2020-03-19 23:23:10 2020-03-20 00:07:10 0:44:00 0:30:14 0:13:46 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml mode/packaged.yaml msgr/async.yaml start.yaml tasks/rados_api_tests.yaml} 2
fail 4868799 2020-03-19 19:01:36 2020-03-19 23:25:18 2020-03-19 23:57:17 0:31:59 0:09:07 0:22:52 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_7.6.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 3
Failure Reason:

Command failed on smithi200 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:7ffcbba56021d111befdfbe45828c0a1854de1b6 bootstrap --fsid d79328a2-6a3c-11ea-9a4e-001a4aab830c --mon-id a --mgr-id y --orphan-initial-daemons --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-addrv '[v1:172.21.15.200:6789]' && sudo chmod +r /etc/ceph/ceph.client.admin.keyring"

fail 4868800 2020-03-19 19:01:37 2020-03-19 23:26:42 2020-03-20 06:02:48 6:36:06 6:24:35 0:11:31 smithi master centos 8.1 rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-avl.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi195 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7ffcbba56021d111befdfbe45828c0a1854de1b6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 4868801 2020-03-19 19:01:38 2020-03-19 23:26:42 2020-03-20 00:04:41 0:37:59 0:30:04 0:07:55 smithi master rhel 8.1 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_with_drive_group (tasks.mgr.dashboard.test_osd.OsdTest)

pass 4868802 2020-03-19 19:01:39 2020-03-19 23:28:53 2020-03-20 00:00:52 0:31:59 0:21:12 0:10:47 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
fail 4868803 2020-03-19 19:01:40 2020-03-19 23:28:53 2020-03-20 00:08:52 0:39:59 0:30:50 0:09:09 smithi master rhel 8.1 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_8.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_with_drive_group (tasks.mgr.dashboard.test_osd.OsdTest)

fail 4868804 2020-03-19 19:01:40 2020-03-19 23:28:53 2020-03-19 23:46:52 0:17:59 0:10:30 0:07:29 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_latest.yaml task/test_cephadm.yaml} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi085 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7ffcbba56021d111befdfbe45828c0a1854de1b6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 4868805 2020-03-19 19:01:41 2020-03-19 23:28:55 2020-03-20 00:16:55 0:48:00 0:27:26 0:20:34 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_with_drive_group (tasks.mgr.dashboard.test_osd.OsdTest)