Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 5199479 2020-07-05 04:54:30 2020-07-05 04:55:42 2020-07-05 06:03:42 1:08:00 1:02:46 0:05:14 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{centos_8} tasks/dashboard} 2
fail 5199480 2020-07-05 04:54:31 2020-07-05 04:55:43 2020-07-05 05:13:42 0:17:59 0:07:09 0:10:50 smithi master centos 8.1 rados/cephadm/smoke/{distro/centos_latest fixed-2 start} 2
Failure Reason:

Command failed on smithi122 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:bb6ae07384f1b19190ec56679b7ccbca78af1f19 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a2533b7e-be7d-11ea-a06e-001a4aab830c -- ceph orch daemon add osd smithi122:vg_nvme/lv_4'

pass 5199481 2020-07-05 04:54:32 2020-07-05 04:55:44 2020-07-05 05:55:43 0:59:59 0:42:10 0:17:49 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} d-balancer/on msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/radosbench-high-concurrency} 2
pass 5199482 2020-07-05 04:54:33 2020-07-05 04:55:42 2020-07-05 06:19:43 1:24:01 1:15:59 0:08:02 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} d-balancer/crush-compat msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/radosbench} 2
fail 5199483 2020-07-05 04:54:34 2020-07-05 04:55:42 2020-07-05 06:07:43 1:12:01 1:05:19 0:06:42 smithi master rhel 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-low-osd-mem-target supported-random-distro$/{rhel_8} tasks/dashboard} 2
Failure Reason:

"2020-07-05T06:00:27.036189+0000 mon.a (mon.0) 6956 : cluster [WRN] Health check failed: Telemetry requires re-opt-in (TELEMETRY_CHANGED)" in cluster log

fail 5199484 2020-07-05 04:54:35 2020-07-05 04:55:44 2020-07-05 05:31:42 0:35:58 0:13:51 0:22:07 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/few rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi060 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:bb6ae07384f1b19190ec56679b7ccbca78af1f19 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d3d7a3fe-be7f-11ea-a06e-001a4aab830c -- ceph orch daemon add osd smithi060:vg_nvme/lv_4'

fail 5199485 2020-07-05 04:54:36 2020-07-05 04:55:42 2020-07-05 05:19:41 0:23:59 0:13:37 0:10:22 smithi master ubuntu 18.04 rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'

pass 5199486 2020-07-05 04:54:37 2020-07-05 04:55:44 2020-07-05 05:33:42 0:37:58 0:26:14 0:11:44 smithi master rhel 8.1 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} 2
pass 5199487 2020-07-05 04:54:38 2020-07-05 04:57:27 2020-07-05 06:15:28 1:18:01 1:06:35 0:11:26 smithi master rhel 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-stupid supported-random-distro$/{rhel_8} tasks/dashboard} 2
pass 5199488 2020-07-05 04:54:39 2020-07-05 04:57:27 2020-07-05 05:31:27 0:34:00 0:25:06 0:08:54 smithi master rhel 8.1 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-many-deletes} 2
pass 5199489 2020-07-05 04:54:40 2020-07-05 04:57:27 2020-07-05 06:11:28 1:14:01 1:01:30 0:12:31 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
fail 5199490 2020-07-05 04:54:41 2020-07-05 04:57:29 2020-07-05 09:53:37 4:56:08 4:42:21 0:13:47 smithi master rhel 8.1 rados/objectstore/{backends/objectstore supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi139 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''

pass 5199491 2020-07-05 04:54:42 2020-07-05 04:59:34 2020-07-05 06:13:35 1:14:01 1:03:11 0:10:50 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
pass 5199492 2020-07-05 04:54:43 2020-07-05 04:59:34 2020-07-05 08:59:40 4:00:06 3:48:44 0:11:22 smithi master ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} 4
pass 5199493 2020-07-05 04:54:44 2020-07-05 05:01:11 2020-07-05 06:09:12 1:08:01 1:02:26 0:05:35 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8} tasks/dashboard} 2
fail 5199494 2020-07-05 04:54:45 2020-07-05 05:01:13 2020-07-05 05:21:13 0:20:00 0:07:01 0:12:59 smithi master centos 8.1 rados/cephadm/smoke/{distro/centos_latest fixed-2 start} 2
Failure Reason:

Command failed on smithi080 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:bb6ae07384f1b19190ec56679b7ccbca78af1f19 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 88f0431a-be7e-11ea-a06e-001a4aab830c -- ceph orch daemon add osd smithi080:vg_nvme/lv_4'

fail 5199495 2020-07-05 04:54:46 2020-07-05 05:01:14 2020-07-05 05:33:14 0:32:00 0:12:51 0:19:09 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi097 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:bb6ae07384f1b19190ec56679b7ccbca78af1f19 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2e4134fe-be80-11ea-a06e-001a4aab830c -- ceph orch daemon add osd smithi097:vg_nvme/lv_4'

pass 5199496 2020-07-05 04:54:47 2020-07-05 05:01:14 2020-07-05 06:19:15 1:18:01 1:05:08 0:12:53 smithi master rhel 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{rhel_8} tasks/dashboard} 2
fail 5199497 2020-07-05 04:54:48 2020-07-05 05:01:15 2020-07-05 05:29:15 0:28:00 0:12:45 0:15:15 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_7.6} msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi168 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:bb6ae07384f1b19190ec56679b7ccbca78af1f19 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ab93080c-be7f-11ea-a06e-001a4aab830c -- ceph orch daemon add osd smithi168:vg_nvme/lv_4'

pass 5199498 2020-07-05 04:54:49 2020-07-05 05:01:28 2020-07-05 06:15:29 1:14:01 1:05:25 0:08:36 smithi master rhel 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{rhel_8} tasks/dashboard} 2
fail 5199499 2020-07-05 04:54:50 2020-07-05 05:01:30 2020-07-05 05:35:30 0:34:00 0:23:51 0:10:09 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_7.6} msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5199500 2020-07-05 04:54:51 2020-07-05 05:13:32 2020-07-05 06:23:33 1:10:01 1:04:46 0:05:15 smithi master rhel 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zstd supported-random-distro$/{rhel_8} tasks/dashboard} 2