Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 5865215 2021-02-07 16:28:13 2021-02-07 16:41:30 2021-02-07 17:00:32 0:19:02 0:09:06 0:09:56 smithi master centos 8.1 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{centos_latest} tasks/progress} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)

fail 5865216 2021-02-07 16:28:14 2021-02-07 16:45:21 2021-02-07 17:10:52 0:25:31 0:16:41 0:08:50 smithi master centos 8.1 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5865217 2021-02-07 16:28:15 2021-02-07 16:48:01 2021-02-07 17:11:24 0:23:23 0:11:53 0:11:30 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Found coredumps on ubuntu@smithi110.front.sepia.ceph.com

fail 5865218 2021-02-07 16:28:16 2021-02-07 16:48:02 2021-02-07 17:13:32 0:25:30 0:11:25 0:14:05 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

fail 5865219 2021-02-07 16:28:16 2021-02-07 16:51:52 2021-02-07 17:07:14 0:15:22 0:06:39 0:08:43 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Found coredumps on ubuntu@smithi078.front.sepia.ceph.com

pass 5865220 2021-02-07 16:28:17 2021-02-07 16:51:52 2021-02-07 17:22:16 0:30:24 0:18:06 0:12:18 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} 2
fail 5865221 2021-02-07 16:28:18 2021-02-07 16:54:43 2021-02-07 17:19:33 0:24:50 0:12:17 0:12:33 smithi master rhel 8.2 rados/monthrash/{ceph clusters/3-mons msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_latest} thrashers/many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi204 with status 16: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c17224787653e7677643e86f3808ede6ebbbab8f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

fail 5865222 2021-02-07 16:28:18 2021-02-07 17:00:34 2021-02-07 17:25:48 0:25:14 0:17:08 0:08:06 smithi master rhel 8.2 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5865223 2021-02-07 16:28:19 2021-02-07 17:02:34 2021-02-07 17:26:31 0:23:57 0:12:00 0:11:57 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Found coredumps on ubuntu@smithi114.front.sepia.ceph.com

fail 5865224 2021-02-07 16:28:20 2021-02-07 17:03:25 2021-02-07 17:32:07 0:28:42 0:11:50 0:16:52 smithi master ubuntu 18.04 rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'

fail 5865225 2021-02-07 16:28:21 2021-02-07 17:10:56 2021-02-07 17:28:16 0:17:20 0:10:50 0:06:30 smithi master rhel 8.2 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{rhel_latest} tasks/progress} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)

fail 5865226 2021-02-07 16:28:21 2021-02-07 17:11:26 2021-02-07 17:33:17 0:21:51 0:11:37 0:10:14 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

fail 5865227 2021-02-07 16:28:22 2021-02-07 17:12:06 2021-02-07 17:29:23 0:17:17 0:06:44 0:10:33 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Found coredumps on ubuntu@smithi068.front.sepia.ceph.com