Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6424639 2021-10-06 14:04:56 2021-10-06 14:05:41 2021-10-06 14:56:02 0:50:21 0:43:15 0:07:06 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6424640 2021-10-06 14:04:56 2021-10-06 14:05:41 2021-10-06 14:53:40 0:47:59 0:40:40 0:07:19 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6424641 2021-10-06 14:04:56 2021-10-06 14:05:42 2021-10-06 15:16:24 1:10:42 1:03:12 0:07:30 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 6424642 2021-10-06 14:04:56 2021-10-06 14:05:42 2021-10-06 15:20:53 1:15:11 1:06:20 0:08:51 smithi master ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 6424643 2021-10-06 14:04:57 2021-10-06 14:05:42 2021-10-06 21:16:01 7:10:19 6:59:59 0:10:20 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi097 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e5658653a0ef4bc989b0655a75cd98370f27d41 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
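
The 124 in this and the next three rados_api_tests failures is the exit status of the `timeout 6h` wrapper visible in the command line: the valgrind run of rados/test.sh was still running when the 6-hour cap expired rather than failing on its own. The later test_cls_2pc_queue.sh failures below show the same pattern with a 3-hour cap. A minimal sketch of that exit-code convention, assuming GNU coreutils `timeout` (this interpretation is the editor's, not taken from the teuthology logs):

    # GNU coreutils `timeout` exits with 124 when it has to stop the wrapped
    # command because the time limit expired; otherwise it propagates the
    # command's own exit status.
    timeout 2s sleep 10
    echo $?   # 124: the limit expired before `sleep` finished
    timeout 2s false
    echo $?   # 1: the command itself failed within the limit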

fail 6424644 2021-10-06 14:04:57 2021-10-06 14:05:42 2021-10-06 21:11:03 7:05:21 6:53:11 0:12:10 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi077 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e5658653a0ef4bc989b0655a75cd98370f27d41 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 6424645 2021-10-06 14:04:57 2021-10-06 14:05:43 2021-10-06 21:05:46 7:00:03 6:49:18 0:10:45 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi116 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e5658653a0ef4bc989b0655a75cd98370f27d41 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 6424646 2021-10-06 14:04:57 2021-10-06 14:05:44 2021-10-06 21:14:45 7:09:01 6:58:11 0:10:50 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi085 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e5658653a0ef4bc989b0655a75cd98370f27d41 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 6424647 2021-10-06 14:04:58 2021-10-06 14:05:44 2021-10-06 14:50:13 0:44:29 0:32:30 0:11:59 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6424648 2021-10-06 14:04:58 2021-10-06 14:05:45 2021-10-06 14:47:24 0:41:39 0:31:18 0:10:21 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6424649 2021-10-06 14:04:58 2021-10-06 14:05:46 2021-10-06 14:47:48 0:42:02 0:30:32 0:11:30 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6424650 2021-10-06 14:04:58 2021-10-06 14:05:47 2021-10-06 14:47:07 0:41:20 0:30:01 0:11:19 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Found coredumps on ubuntu@smithi096.front.sepia.ceph.com

pass 6424651 2021-10-06 14:04:59 2021-10-06 14:05:47 2021-10-06 14:33:43 0:27:56 0:17:29 0:10:27 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
pass 6424652 2021-10-06 14:04:59 2021-10-06 14:05:48 2021-10-06 14:34:57 0:29:09 0:17:27 0:11:42 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
pass 6424653 2021-10-06 14:04:59 2021-10-06 14:05:48 2021-10-06 14:32:50 0:27:02 0:16:53 0:10:09 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
pass 6424654 2021-10-06 14:04:59 2021-10-06 14:05:48 2021-10-06 14:34:15 0:28:27 0:17:10 0:11:17 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
fail 6424655 2021-10-06 14:05:00 2021-10-06 14:05:49 2021-10-06 18:16:59 4:11:10 3:58:41 0:12:29 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi129 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e5658653a0ef4bc989b0655a75cd98370f27d41 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

fail 6424656 2021-10-06 14:05:00 2021-10-06 14:05:50 2021-10-06 18:12:31 4:06:41 3:56:07 0:10:34 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi162 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e5658653a0ef4bc989b0655a75cd98370f27d41 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

fail 6424657 2021-10-06 14:05:00 2021-10-06 14:05:50 2021-10-06 18:17:42 4:11:52 4:01:06 0:10:46 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi071 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e5658653a0ef4bc989b0655a75cd98370f27d41 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

fail 6424658 2021-10-06 14:05:00 2021-10-06 14:05:50 2021-10-06 18:16:50 4:11:00 3:58:51 0:12:09 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi124 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e5658653a0ef4bc989b0655a75cd98370f27d41 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

pass 6424659 2021-10-06 14:05:01 2021-10-06 14:05:51 2021-10-06 14:47:50 0:41:59 0:30:39 0:11:20 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6424660 2021-10-06 14:05:01 2021-10-06 14:05:51 2021-10-06 14:48:12 0:42:21 0:31:52 0:10:29 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6424661 2021-10-06 14:05:01 2021-10-06 14:05:51 2021-10-06 14:24:15 0:18:24 0:08:41 0:09:43 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi041 with status 126: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6a065c50-26b0-11ec-8c25-001a4aab830c -- ceph mon dump -f json'
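
Status 126 is the shell convention for "command found but could not be executed"; container runtimes such as podman and docker report the same code when the command inside the container cannot be invoked, which is the likely path for this `cephadm ... shell` call (the actual root cause is not visible in this summary). A small illustration of the exit-code convention itself, assuming any POSIX shell:

    # 126 means the target exists but cannot be executed; /etc/hostname is
    # used here only because it is a file that exists but is not executable.
    sh -c /etc/hostname
    echo $?   # 126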

pass 6424662 2021-10-06 14:05:01 2021-10-06 14:05:51 2021-10-06 14:47:59 0:42:08 0:30:21 0:11:47 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6424663 2021-10-06 14:05:02 2021-10-06 14:05:52 2021-10-06 14:42:55 0:37:03 0:27:48 0:09:15 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds
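
This message (shared by the next three rook smoke failures) describes a polling check that gave up after 90 attempts spread over 900 seconds, i.e. roughly one attempt every 10 seconds, without the OSDs ever reaching the expected count. A purely hypothetical sketch of that kind of loop, not the actual rook task code; the kubectl query and the expected count of 3 are illustrative assumptions:

    # Hypothetical polling loop: 90 tries at a 10-second interval accounts
    # for the 900-second wait reported above.
    for attempt in $(seq 1 90); do
        count=$(kubectl -n rook-ceph get pods -l app=rook-ceph-osd --no-headers 2>/dev/null | wc -l)
        [ "$count" -ge 3 ] && exit 0   # 3-node cluster in this suite
        sleep 10
    done
    echo "'check osd count' reached maximum tries (90) after waiting for 900 seconds" >&2
    exit 1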

fail 6424664 2021-10-06 14:05:02 2021-10-06 14:05:53 2021-10-06 14:42:59 0:37:06 0:27:45 0:09:21 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 6424665 2021-10-06 14:05:02 2021-10-06 14:05:53 2021-10-06 14:42:25 0:36:32 0:28:56 0:07:36 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 6424666 2021-10-06 14:05:02 2021-10-06 14:05:54 2021-10-06 14:42:53 0:36:59 0:28:29 0:08:30 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

pass 6424667 2021-10-06 14:05:03 2021-10-06 14:05:54 2021-10-06 16:55:36 2:49:42 2:18:52 0:30:50 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6424668 2021-10-06 14:05:03 2021-10-06 14:05:54 2021-10-06 16:58:54 2:53:00 2:19:53 0:33:07 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6424669 2021-10-06 14:05:03 2021-10-06 14:05:54 2021-10-06 16:54:37 2:48:43 2:18:13 0:30:30 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1
pass 6424670 2021-10-06 14:05:03 2021-10-06 14:05:55 2021-10-06 16:48:38 2:42:43 2:14:04 0:28:39 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8.stream}} 1