User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
adking | 2023-04-04 05:19:12 | 2023-04-04 05:19:23 | 2023-04-05 04:58:51 | 23:39:28 | orch:cephadm | wip-adk-testing-2023-04-03-1541 | smithi | 27e1ef4 | 12 | 6 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7230589 | 2023-04-04 05:19:16 | 2023-04-04 05:19:17 | 2023-04-04 06:34:49 | 1:15:32 | 1:02:25 | 0:13:07 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 |
pass | 7230590 | 2023-04-04 05:19:17 | 2023-04-04 05:19:18 | 2023-04-04 05:46:10 | 0:26:52 | 0:16:11 | 0:10:41 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
fail | 7230591 | 2023-04-04 05:19:18 | | 2023-04-04 05:44:19 | 0:18:11 | | | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 |
Failure Reason:
Command failed on smithi033 with status 32: "sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'mount -t nfs 10.0.31.33:/fake /mnt/foo'" |
pass | 7230592 | 2023-04-04 05:19:19 | 2023-04-04 05:19:19 | 2023-04-04 07:20:26 | 2:01:07 | 1:28:27 | 0:32:40 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 |
pass | 7230593 | 2023-04-04 05:19:20 | 2023-04-04 05:19:20 | 2023-04-04 07:28:29 | 2:09:09 | 1:38:53 | 0:30:16 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 |
fail | 7230594 | 2023-04-04 05:19:21 | | 2023-04-04 06:28:19 | 0:37:09 | | | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 |
Failure Reason:
Command failed on smithi047 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c8a50dae-d2af-11ed-9aff-001a4aab830c -e sha1=27e1ef4e404073fe46b21aac3f1ae821adacafcc -- bash -c \'ceph orch ps --format json | jq --arg TARGET_ID "$TARGET_ID" -e \'"\'"\'.[] | select(.daemon_type=="mgr") | select(.container_image_id==$TARGET_ID)\'"\'"\' | jq -s > out.json\'' |
fail | 7230595 | 2023-04-04 05:19:22 | | 2023-04-04 05:50:19 | 0:20:38 | | | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 |
Failure Reason:
Command failed on smithi027 with status 4: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:27e1ef4e404073fe46b21aac3f1ae821adacafcc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3e6599b4-d2ab-11ed-9aff-001a4aab830c -- bash -c \'set -ex\n# since we don\'"\'"\'t know the real hostnames before the test, the next\n# bit is in order to replace the fake hostnames "host.a/b/c" with\n# the actual names cephadm knows the host by within the mon spec\nceph orch host ls --format json | jq -r \'"\'"\'.[] | .hostname\'"\'"\' > realnames\necho $\'"\'"\'host.a\\nhost.b\\nhost.c\'"\'"\' > fakenames\necho $\'"\'"\'{datacenter=a}\\n{datacenter=b,rack=2}\\n{datacenter=a,rack=3}\'"\'"\' > crush_locs\nceph orch ls --service-name mon --export > mon.yaml\nMONSPEC=`cat mon.yaml`\necho "$MONSPEC"\nwhile read realname <&3 && read fakename <&4; do\n MONSPEC="${MONSPEC//$fakename/$realname}"\ndone 3<realnames 4<fakenames\necho "$MONSPEC" > mon.yaml\ncat mon.yaml\n# now the spec should have the real hostnames, so let\'"\'"\'s re-apply\nceph orch apply -i mon.yaml\nsleep 90\nceph orch ps --refresh\nceph orch ls --service-name mon --export > mon.yaml; ceph orch apply -i mon.yaml\nsleep 90\nceph mon dump\nceph mon dump --format json\n# verify all the crush locations got set from "ceph mon dump" output\nwhile read hostname <&3 && read crushloc <&4; do\n ceph mon dump --format json | jq --arg hostname "$hostname" --arg crushloc "$crushloc" -e \'"\'"\'.mons | .[] | select(.name == $hostname) | .crush_location == $crushloc\'"\'"\'\ndone 3<realnames 4<crush_locs\n\'' |
pass | 7230596 | 2023-04-04 05:19:23 | 2023-04-04 05:19:23 | 2023-04-04 06:44:10 | 1:24:47 | 0:56:09 | 0:28:38 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
pass | 7230597 | 2023-04-04 05:19:24 | 2023-04-04 05:19:24 | 2023-04-04 06:23:14 | 1:03:50 | 0:31:05 | 0:32:45 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 |
pass | 7230598 | 2023-04-04 05:19:25 | | 2023-04-04 06:16:58 | 0:42:05 | | | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 |
pass | 7230599 | 2023-04-04 05:19:27 | 2023-04-04 05:19:27 | 2023-04-04 07:34:06 | 2:14:39 | 1:43:13 | 0:31:26 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 |
pass | 7230600 | 2023-04-04 05:19:28 | 2023-04-04 05:19:28 | 2023-04-04 06:19:59 | 1:00:31 | 0:30:34 | 0:29:57 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
pass | 7230601 | 2023-04-04 05:19:29 | 2023-04-04 05:19:29 | 2023-04-04 06:45:17 | 1:25:48 | 0:55:02 | 0:30:46 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
fail | 7230602 | 2023-04-04 05:19:30 | 2023-04-04 05:19:30 | 2023-04-04 05:47:21 | 0:27:51 | 0:19:14 | 0:08:37 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
Failure Reason:
Command failed on smithi077 with status 32: "sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'mount -t nfs 10.0.31.77:/fake /mnt/foo'" |
pass | 7230603 | 2023-04-04 05:19:31 | 2023-04-04 05:19:31 | 2023-04-04 07:23:45 | 2:04:14 | 1:31:02 | 0:33:12 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 |
fail | 7230604 | 2023-04-04 05:19:32 | | 2023-04-04 05:56:07 | 0:23:14 | | | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 |
Failure Reason:
Command failed on smithi118 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ce2249fe-d2aa-11ed-9aff-001a4aab830c -e sha1=27e1ef4e404073fe46b21aac3f1ae821adacafcc -- bash -c \'ceph orch ps --format json | jq --arg TARGET_ID "$TARGET_ID" -e \'"\'"\'.[] | select(.daemon_type=="mgr") | select(.container_image_id==$TARGET_ID)\'"\'"\' | jq -s > out.json\'' |
fail | 7230605 | 2023-04-04 05:19:33 | 2023-04-04 05:19:33 | 2023-04-05 04:58:51 | 23:39:18 | 9:30:24 | 14:08:54 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
Failure Reason:
Command failed on smithi141 with status 4: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:27e1ef4e404073fe46b21aac3f1ae821adacafcc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a4ce6fee-d320-11ed-9aff-001a4aab830c -- bash -c \'set -ex\n# since we don\'"\'"\'t know the real hostnames before the test, the next\n# bit is in order to replace the fake hostnames "host.a/b/c" with\n# the actual names cephadm knows the host by within the mon spec\nceph orch host ls --format json | jq -r \'"\'"\'.[] | .hostname\'"\'"\' > realnames\necho $\'"\'"\'host.a\\nhost.b\\nhost.c\'"\'"\' > fakenames\necho $\'"\'"\'{datacenter=a}\\n{datacenter=b,rack=2}\\n{datacenter=a,rack=3}\'"\'"\' > crush_locs\nceph orch ls --service-name mon --export > mon.yaml\nMONSPEC=`cat mon.yaml`\necho "$MONSPEC"\nwhile read realname <&3 && read fakename <&4; do\n MONSPEC="${MONSPEC//$fakename/$realname}"\ndone 3<realnames 4<fakenames\necho "$MONSPEC" > mon.yaml\ncat mon.yaml\n# now the spec should have the real hostnames, so let\'"\'"\'s re-apply\nceph orch apply -i mon.yaml\nsleep 90\nceph orch ps --refresh\nceph orch ls --service-name mon --export > mon.yaml; ceph orch apply -i mon.yaml\nsleep 90\nceph mon dump\nceph mon dump --format json\n# verify all the crush locations got set from "ceph mon dump" output\nwhile read hostname <&3 && read crushloc <&4; do\n ceph mon dump --format json | jq --arg hostname "$hostname" --arg crushloc "$crushloc" -e \'"\'"\'.mons | .[] | select(.name == $hostname) | .crush_location == $crushloc\'"\'"\'\ndone 3<realnames 4<crush_locs\n\'' |
pass | 7230606 | 2023-04-04 05:19:34 | 2023-04-04 05:19:34 | 2023-04-04 07:25:13 | 2:05:39 | 1:33:41 | 0:31:58 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |