Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6532050 2021-11-28 15:48:37 2021-11-28 16:11:49 2021-11-28 16:29:14 0:17:25 0:06:47 0:10:38 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_18.04 workloads/sample_fio} 1
Failure Reason: Command failed on smithi060 with status 2: 'cd /home/ubuntu/cephtest/fio && ./configure && make'
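Note: this identical fio build failure appears in every ubuntu 18.04 rados/perf job listed here (6532050, 6532062, 6532065, 6532067, 6532069, 6532073, 6532083). A minimal sketch for re-running the failing step by hand on the affected node, assuming the teuthology-staged fio checkout is still present at /home/ubuntu/cephtest/fio (the log file names are illustrative):

    # Re-run the exact step from the failure reason and capture its output;
    # the job log only records the compound command's exit status (2), so it
    # does not show whether ./configure or make is the step that fails.
    cd /home/ubuntu/cephtest/fio
    ./configure 2>&1 | tee /tmp/fio-configure.log
    make 2>&1 | tee /tmp/fio-make.log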

dead 6532052 2021-11-28 15:48:38 2021-11-28 16:12:30 2021-11-28 16:31:17 0:18:47 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 6532054 2021-11-28 15:48:39 2021-11-28 16:18:51 2021-11-28 16:52:00 0:33:09 0:23:35 0:09:34 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{centos_8} tasks/dashboard} 2
Failure Reason: Test failure: test_ganesha (unittest.loader._FailedTest)

fail 6532056 2021-11-28 15:48:40 2021-11-28 16:24:43 2021-11-28 16:40:09 0:15:26 0:06:20 0:09:06 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason: Command failed on smithi097 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 6532058 2021-11-28 15:48:41 2021-11-28 16:26:55 2021-11-28 16:28:54 0:01:59 0 smithi master rados/cephadm/osds/2-ops/rm-zap-flag
Failure Reason: list index out of range

pass 6532060 2021-11-28 15:48:42 2021-11-28 16:26:54 2021-11-28 17:09:05 0:42:11 0:31:13 0:10:58 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6532062 2021-11-28 15:48:43 2021-11-28 16:28:25 2021-11-28 16:44:05 0:15:40 0:06:47 0:08:53 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_18.04 workloads/fio_4K_rand_read} 1
Failure Reason: Command failed on smithi137 with status 2: 'cd /home/ubuntu/cephtest/fio && ./configure && make'

fail 6532063 2021-11-28 15:48:44 2021-11-28 16:28:35 2021-11-28 16:45:53 0:17:18 0:06:18 0:11:00 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason: Command failed on smithi117 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 6532065 2021-11-28 15:48:45 2021-11-28 16:31:36 2021-11-28 16:47:08 0:15:32 0:06:52 0:08:40 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_18.04 workloads/fio_4K_rand_rw} 1
Failure Reason: Command failed on smithi040 with status 2: 'cd /home/ubuntu/cephtest/fio && ./configure && make'

fail 6532067 2021-11-28 15:48:46 2021-11-28 16:35:37 2021-11-28 16:52:08 0:16:31 0:06:51 0:09:40 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_18.04 workloads/fio_4M_rand_read} 1
Failure Reason: Command failed on smithi060 with status 2: 'cd /home/ubuntu/cephtest/fio && ./configure && make'

fail 6532069 2021-11-28 15:48:47 2021-11-28 16:37:28 2021-11-28 16:55:26 0:17:58 0:06:52 0:11:06 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_18.04 workloads/fio_4M_rand_rw} 1
Failure Reason: Command failed on smithi063 with status 2: 'cd /home/ubuntu/cephtest/fio && ./configure && make'

fail 6532071 2021-11-28 15:48:48 2021-11-28 16:38:19 2021-11-28 16:55:15 0:16:56 0:06:37 0:10:19 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason: Command failed on smithi099 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 6532073 2021-11-28 15:48:49 2021-11-28 16:38:30 2021-11-28 16:56:01 0:17:31 0:06:56 0:10:35 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_18.04 workloads/fio_4M_rand_write} 1
Failure Reason: Command failed on smithi073 with status 2: 'cd /home/ubuntu/cephtest/fio && ./configure && make'

fail 6532074 2021-11-28 15:48:50 2021-11-28 16:38:30 2021-11-28 16:53:22 0:14:52 0:03:27 0:11:25 smithi master ubuntu 20.04 rados/dashboard/{centos_8.2_container_tools_3.0 clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason: Command failed on smithi197 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 6532075 2021-11-28 15:48:51 2021-11-28 16:38:51 2021-11-28 16:56:50 0:17:59 0:06:25 0:11:34 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason: Command failed on smithi097 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 6532076 2021-11-28 15:48:52 2021-11-28 16:40:11 2021-11-28 17:31:20 0:51:09 0:39:47 0:11:22 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 6532077 2021-11-28 15:48:53 2021-11-28 16:46:04 2021-11-28 16:48:03 0:01:59 0 smithi master rados/cephadm/osds/2-ops/rm-zap-flag
Failure Reason: list index out of range

dead 6532078 2021-11-28 15:48:54 2021-11-28 16:46:02 2021-11-28 17:01:49 0:15:47 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 6532079 2021-11-28 15:48:55 2021-11-28 16:46:53 2021-11-28 17:29:11 0:42:18 0:28:08 0:14:10 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6532080 2021-11-28 15:48:56 2021-11-28 16:51:44 2021-11-28 23:29:51 6:38:07 6:28:00 0:10:07 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason: Command failed (workunit test rados/test.sh) on smithi094 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=fdc003bc12f1b2443c4596eeacb32cf62e806970 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
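Note: exit status 124 is what timeout(1) returns when it kills the wrapped command for exceeding its limit, so this workunit hit the 6h cap visible in the command above rather than failing an assertion. A quick sketch of the convention:

    # timeout(1) exits with 124 when the wrapped command is killed for running too long
    timeout 1 sleep 5
    echo "exit status: $?"   # prints 124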

fail 6532081 2021-11-28 15:48:57 2021-11-28 16:51:44 2021-11-28 17:05:59 0:14:15 0:06:35 0:07:40 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason: Command failed on smithi040 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 6532082 2021-11-28 15:48:58 2021-11-28 16:51:44 2021-11-28 17:28:52 0:37:08 0:30:09 0:06:59 smithi master rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} 2
fail 6532083 2021-11-28 15:48:59 2021-11-28 16:52:05 2021-11-28 17:06:43 0:14:38 0:06:48 0:07:50 smithi master ubuntu 18.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_18.04 workloads/sample_fio} 1
Failure Reason: Command failed on smithi060 with status 2: 'cd /home/ubuntu/cephtest/fio && ./configure && make'

fail 6532084 2021-11-28 15:48:59 2021-11-28 16:52:15 2021-11-28 17:07:41 0:15:26 0:06:19 0:09:07 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason: Command failed on smithi082 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'