Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6955457 2022-08-02 13:03:37 2022-08-02 13:03:38 2022-08-02 13:33:08 0:29:30 0:18:40 0:10:50 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
pass 6955458 2022-08-02 13:03:38 2022-08-02 13:03:39 2022-08-02 13:40:38 0:36:59 0:31:13 0:05:46 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6955459 2022-08-02 13:03:39 2022-08-02 13:03:39 2022-08-02 13:42:31 0:38:52 0:32:42 0:06:10 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
pass 6955460 2022-08-02 13:03:40 2022-08-02 13:03:40 2022-08-02 13:27:18 0:23:38 0:17:42 0:05:56 smithi main rhel 8.4 orch:cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 2-node-mgr agent/off orchestrator_cli} 2
pass 6955461 2022-08-02 13:03:41 2022-08-02 13:03:41 2022-08-02 13:46:47 0:43:06 0:37:09 0:05:57 smithi main centos 8.stream orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} 3
fail 6955462 2022-08-02 13:03:42 2022-08-02 13:03:42 2022-08-02 13:15:13 0:11:31 0:05:38 0:05:53 smithi main orch:cephadm/workunits/{agent/on mon_election/connectivity task/test_cephadm_repos} 1
Failure Reason:

Command failed (workunit test cephadm/test_repos.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e90d7284289032feaae2fb612c2b15114864f1f1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'

pass 6955463 2022-08-02 13:03:43 2022-08-02 13:03:43 2022-08-02 13:40:52 0:37:09 0:31:16 0:05:53 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6955464 2022-08-02 13:03:44 2022-08-02 13:03:44 2022-08-02 13:50:47 0:47:03 0:40:47 0:06:16 smithi main rhel 8.4 orch:cephadm/thrash/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
pass 6955465 2022-08-02 13:03:45 2022-08-02 13:03:45 2022-08-02 13:38:05 0:34:20 0:27:07 0:07:13 smithi main rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
pass 6955466 2022-08-02 13:03:46 2022-08-02 13:03:46 2022-08-02 13:42:18 0:38:32 0:26:22 0:12:10 smithi main ubuntu 20.04 orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
pass 6955467 2022-08-02 13:03:47 2022-08-02 13:03:47 2022-08-02 13:29:33 0:25:46 0:19:16 0:06:30 smithi main orch:cephadm/workunits/{agent/on mon_election/connectivity task/test_orch_cli} 1
pass 6955468 2022-08-02 13:03:48 2022-08-02 13:03:48 2022-08-02 13:32:56 0:29:08 0:22:01 0:07:07 smithi main rhel 8.4 orch:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 6955469 2022-08-02 13:03:49 2022-08-02 13:03:49 2022-08-02 13:41:18 0:37:29 0:30:12 0:07:17 smithi main ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
pass 6955470 2022-08-02 13:03:50 2022-08-02 13:03:50 2022-08-02 13:43:34 0:39:44 0:31:25 0:08:19 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6955471 2022-08-02 13:03:51 2022-08-02 13:03:51 2022-08-02 13:27:39 0:23:48 0:16:25 0:07:23 smithi main centos 8.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools_crun} 2-node-mgr agent/on orchestrator_cli} 2
pass 6955472 2022-08-02 13:03:52 2022-08-02 13:03:52 2022-08-02 13:43:18 0:39:26 0:31:50 0:07:36 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
fail 6955473 2022-08-02 13:03:53 2022-08-02 13:03:53 2022-08-02 13:15:26 0:11:33 0:05:36 0:05:57 smithi main orch:cephadm/workunits/{agent/on mon_election/classic task/test_cephadm_repos} 1
Failure Reason:

Command failed (workunit test cephadm/test_repos.sh) on smithi186 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e90d7284289032feaae2fb612c2b15114864f1f1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
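For quick triage of a plain-text dump in this format, the outcomes can be tallied with a few lines of Python. This is a sketch, not part of the run itself: it assumes each job row begins with a status keyword (`pass`, `fail`, or `dead`, the statuses teuthology reports) followed by the numeric job ID, and ignores everything else (header, failure-reason lines, blanks).

```python
from collections import Counter

def tally(lines):
    """Count job outcomes in a flattened teuthology results dump.

    A line is treated as a job row only if it starts with a known
    status keyword followed by a numeric job ID; all other lines
    (header, 'Failure Reason:' blocks, blanks) are skipped.
    """
    counts = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) >= 2 and parts[0] in ("pass", "fail", "dead") and parts[1].isdigit():
            counts[parts[0]] += 1
    return counts

# Tiny excerpt of the dump above, truncated for illustration:
dump = """\
pass 6955457 2022-08-02 13:03:37 2022-08-02 13:03:38 ...
fail 6955462 2022-08-02 13:03:42 2022-08-02 13:03:42 ...
Failure Reason:
pass 6955463 2022-08-02 13:03:43 2022-08-02 13:03:43 ...
"""
print(tally(dump.splitlines()))  # Counter({'pass': 2, 'fail': 1})
```

Filtering on the leading keyword plus a numeric second field keeps the parser from miscounting the header row or the long failure-reason command lines, which also contain the word "failed".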