User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
benhanokh | 2021-10-11 18:50:56 | 2021-10-11 18:53:13 | 2021-10-12 01:44:28 | 6:51:15 | rados | gbh_debug_hybrid_alloc_N2 | smithi | b7d4522 | 11 | 4 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6432083 | 2021-10-11 18:52:28 | 2021-10-11 18:53:13 | 2021-10-11 19:58:12 | 1:04:59 | 0:57:53 | 0:07:06 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6432084 | 2021-10-11 18:52:28 | 2021-10-11 18:53:13 | 2021-10-11 19:46:24 | 0:53:11 | 0:46:38 | 0:06:33 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6432085 | 2021-10-11 18:52:29 | 2021-10-11 18:53:13 | 2021-10-12 01:40:15 | 6:47:02 | 6:36:47 | 0:10:15 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi125 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b7d4522218de0a42c344468e5f8cd53cb9cb87b3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6432086 | 2021-10-11 18:52:29 | 2021-10-11 18:53:13 | 2021-10-12 01:44:28 | 6:51:15 | 6:40:49 | 0:10:26 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi028 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b7d4522218de0a42c344468e5f8cd53cb9cb87b3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6432087 | 2021-10-11 18:52:30 | 2021-10-11 18:53:14 | 2021-10-11 19:32:14 | 0:39:00 | 0:29:42 | 0:09:18 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Found coredumps on ubuntu@smithi156.front.sepia.ceph.com
pass | 6432088 | 2021-10-11 18:52:30 | 2021-10-11 18:53:15 | 2021-10-11 19:32:09 | 0:38:54 | 0:29:59 | 0:08:55 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 6432089 | 2021-10-11 18:52:31 | 2021-10-11 18:53:15 | 2021-10-11 19:20:14 | 0:26:59 | 0:16:30 | 0:10:29 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 6432090 | 2021-10-11 18:52:31 | 2021-10-11 18:53:16 | 2021-10-11 19:20:28 | 0:27:12 | 0:16:30 | 0:10:42 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 6432091 | 2021-10-11 18:52:32 | 2021-10-11 18:53:16 | 2021-10-11 21:56:16 | 3:03:00 | 2:53:51 | 0:09:09 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
pass | 6432092 | 2021-10-11 18:52:32 | 2021-10-11 18:53:16 | 2021-10-11 21:24:57 | 2:31:41 | 2:21:37 | 0:10:04 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
pass | 6432093 | 2021-10-11 18:52:33 | 2021-10-11 18:53:18 | 2021-10-11 19:35:21 | 0:42:03 | 0:31:38 | 0:10:25 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 6432094 | 2021-10-11 18:52:33 | 2021-10-11 18:53:18 | 2021-10-11 19:34:02 | 0:40:44 | 0:30:47 | 0:09:57 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
fail | 6432095 | 2021-10-11 18:52:34 | 2021-10-11 18:53:18 | 2021-10-11 19:28:57 | 0:35:39 | 0:28:53 | 0:06:46 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason:
'check osd count' reached maximum tries (90) after waiting for 900 seconds
dead | 6432096 | 2021-10-11 18:52:34 | 2021-10-11 18:53:18 | 2021-10-11 20:00:51 | 1:07:33 | | | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 |
Failure Reason:
Error reimaging machines: Timeout opening channel.
pass | 6432097 | 2021-10-11 18:52:34 | 2021-10-11 18:53:20 | 2021-10-11 21:43:17 | 2:49:57 | 2:18:48 | 0:31:09 | smithi | master | centos | 8.stream | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} | 1 | |
pass | 6432098 | 2021-10-11 18:52:34 | 2021-10-11 18:53:20 | 2021-10-11 21:38:45 | 2:45:25 | 2:15:49 | 0:29:36 | smithi | master | centos | 8.stream | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} | 1 |