Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 5967129 2021-03-15 09:33:45 2021-03-15 09:34:42 2021-03-15 10:02:10 0:27:28 0:20:00 0:07:28 smithi master ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable mon_election/classic task/test_orch_cli} 1
Failure Reason:

Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)

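The test_cephfs_mirror failures in this run can be rescheduled in isolation for a retest. A minimal sketch using the standard teuthology-suite interface, assuming access to a teuthology scheduling node; the sha1 is the CEPH_REF recorded in the failure commands below, and the priority value is an arbitrary example:

    # Hedged sketch: re-run only the rados/cephadm/workunits jobs whose
    # description matches test_orch_cli, pinned to the same build as this run.
    teuthology-suite \
      --ceph master \
      --sha1 86d577d82dbe15dd1793647ff36565039110c835 \
      --suite rados/cephadm/workunits \
      --filter test_orch_cli \
      --machine-type smithi \
      --priority 100
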
fail 5967130 2021-03-15 09:33:47 2021-03-15 09:34:42 2021-03-15 09:56:05 0:21:23 0:12:50 0:08:33 smithi master ubuntu 20.04 rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_stable task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/create_iscsi_disks.sh) on smithi169 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=86d577d82dbe15dd1793647ff36565039110c835 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'

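Stripped of the coverage and ulimit wrappers, the failing invocation above reduces to a short sequence that can be replayed by hand. A minimal sketch, assuming a checkout of ceph.git at the logged CEPH_REF under $TESTDIR/clone.client.1 and a reachable test cluster with a client.1 identity:

    # Hedged reproduction of the logged workunit command; paths, environment,
    # and the client ID mirror the teuthology job above.
    TESTDIR=/home/ubuntu/cephtest
    mkdir -p -- "$TESTDIR/mnt.1/client.1/tmp"
    cd -- "$TESTDIR/mnt.1/client.1/tmp"
    CEPH_ARGS="--cluster ceph" CEPH_ID=1 \
      CEPH_REF=86d577d82dbe15dd1793647ff36565039110c835 \
      timeout 3h "$TESTDIR/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh"

Exit status 1 from this script is what teuthology reports as the workunit failure on each of the dashboard/test_e2e jobs in this run.
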
dead 5967131 2021-03-15 09:33:49 2021-03-15 09:36:23 2021-03-15 14:32:37 4:56:14 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} 2
fail 5967132 2021-03-15 09:33:51 2021-03-15 09:36:33 2021-03-15 09:56:39 0:20:06 0:12:45 0:07:21 smithi master ubuntu 20.04 rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_testing task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/create_iscsi_disks.sh) on smithi066 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=86d577d82dbe15dd1793647ff36565039110c835 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'

dead 5967133 2021-03-15 09:33:53 2021-03-15 09:36:33 2021-03-15 14:33:15 4:56:42 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/connectivity} 2
fail 5967134 2021-03-15 09:33:55 2021-03-15 09:37:34 2021-03-15 10:04:52 0:27:18 0:19:58 0:07:20 smithi master ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)

pass 5967135 2021-03-15 09:33:57 2021-03-15 09:37:34 2021-03-15 11:38:30 2:00:56 1:53:07 0:07:49 smithi master centos 8.3 rados/standalone/{mon_election/classic supported-random-distro$/{centos_8} workloads/scrub} 1
dead 5967136 2021-03-15 09:33:59 2021-03-15 09:37:44 2021-03-15 14:33:30 4:55:46 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados tasks/rados_api_tests validater/valgrind} 2
dead 5967137 2021-03-15 09:34:00 2021-03-15 09:37:55 2021-03-15 14:33:30 4:55:35 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} 2
fail 5967138 2021-03-15 09:34:01 2021-03-15 09:40:05 2021-03-15 09:59:05 0:19:00 0:12:46 0:06:14 smithi master ubuntu 20.04 rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_stable task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/create_iscsi_disks.sh) on smithi060 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=86d577d82dbe15dd1793647ff36565039110c835 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'

pass 5967139 2021-03-15 09:34:02 2021-03-15 09:40:05 2021-03-15 12:43:50 3:03:45 2:53:26 0:10:19 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/valgrind} 2
dead 5967140 2021-03-15 09:34:03 2021-03-15 09:40:36 2021-03-15 14:32:41 4:52:05 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} 2
fail 5967141 2021-03-15 09:34:04 2021-03-15 09:40:46 2021-03-15 10:09:20 0:28:34 0:20:05 0:08:29 smithi master ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing mon_election/classic task/test_orch_cli} 1
Failure Reason:

Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)

pass 5967142 2021-03-15 09:34:07 2021-03-15 09:41:46 2021-03-15 10:21:36 0:39:50 0:33:05 0:06:45 smithi master ubuntu 20.04 rados/cephadm/thrash/{0-distro/ubuntu_20.04_kubic_testing 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
pass 5967143 2021-03-15 09:34:08 2021-03-15 09:41:47 2021-03-15 10:06:52 0:25:05 0:17:52 0:07:13 smithi master rhel 8.3 rados/singleton/{all/radostool mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
fail 5967144 2021-03-15 09:34:09 2021-03-15 09:41:57 2021-03-15 10:02:56 0:20:59 0:14:32 0:06:27 smithi master ubuntu 20.04 rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_testing task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/create_iscsi_disks.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=86d577d82dbe15dd1793647ff36565039110c835 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'

dead 5967145 2021-03-15 09:34:11 2021-03-15 09:41:57 2021-03-15 14:34:10 4:52:13 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/valgrind} 2
pass 5967146 2021-03-15 09:34:12 2021-03-15 09:42:47 2021-03-15 10:31:15 0:48:28 0:40:30 0:07:58 smithi master ubuntu 20.04 rados/cephadm/thrash/{0-distro/ubuntu_20.04_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2