User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
skanta | 2024-09-19 07:49:31 | 2024-09-19 07:52:35 | 2024-09-19 14:43:40 | 6:51:05 | rados | wip-bharath11-testing-2024-09-18-1115-quincy | smithi | 7061ea6 | 8 | 36 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7912346 | 2024-09-19 07:50:44 | 2024-09-19 07:52:35 | 2024-09-19 08:09:33 | 0:16:58 | 0:07:07 | 0:09:51 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi148 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7061ea6ca50908d104e9f3979dc615e118d9e153 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 7912347 | 2024-09-19 07:50:45 | 2024-09-19 07:52:36 | 2024-09-19 08:08:41 | 0:16:05 | 0:03:51 | 0:12:14 | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=pacific
pass | 7912348 | 2024-09-19 07:50:47 | 2024-09-19 07:55:26 | 2024-09-19 08:20:06 | 0:24:40 | 0:15:03 | 0:09:37 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
fail | 7912349 | 2024-09-19 07:50:48 | 2024-09-19 07:55:27 | 2024-09-19 08:08:29 | 0:13:02 | 0:03:48 | 0:09:14 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason:
Command failed on smithi083 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912350 | 2024-09-19 07:50:49 | 2024-09-19 07:55:37 | 2024-09-19 08:08:47 | 0:13:10 | 0:03:52 | 0:09:18 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi117 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912351 | 2024-09-19 07:50:51 | 2024-09-19 07:55:48 | 2024-09-19 08:08:27 | 0:12:39 | 0:04:49 | 0:07:50 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_repos.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7061ea6ca50908d104e9f3979dc615e118d9e153 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
fail | 7912352 | 2024-09-19 07:50:52 | 2024-09-19 07:55:48 | 2024-09-19 08:13:13 | 0:17:25 | 0:04:49 | 0:12:36 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
['Data could not be sent to remote host "smithi070.front.sepia.ceph.com". Make sure this host can be reached over ssh: ssh: connect to host smithi070.front.sepia.ceph.com port 22: No route to host']
fail | 7912353 | 2024-09-19 07:50:53 | 2024-09-19 07:55:49 | 2024-09-19 08:14:52 | 0:19:03 | | | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
Failed to reconnect to smithi037
fail | 7912354 | 2024-09-19 07:50:55 | 2024-09-19 07:59:49 | 2024-09-19 08:14:45 | 0:14:56 | 0:03:51 | 0:11:05 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi016 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912355 | 2024-09-19 07:50:56 | 2024-09-19 08:01:00 | 2024-09-19 08:22:04 | 0:21:04 | 0:11:39 | 0:09:25 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on smithi146 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7061ea6ca50908d104e9f3979dc615e118d9e153 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'
fail | 7912356 | 2024-09-19 07:50:57 | 2024-09-19 08:01:01 | 2024-09-19 08:14:51 | 0:13:50 | 0:03:49 | 0:10:01 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
Command failed on smithi103 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912357 | 2024-09-19 07:50:58 | 2024-09-19 08:01:21 | 2024-09-19 08:22:25 | 0:21:04 | 0:12:40 | 0:08:24 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi134 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7061ea6ca50908d104e9f3979dc615e118d9e153 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7912358 | 2024-09-19 07:51:00 | 2024-09-19 08:01:21 | 2024-09-19 08:19:51 | 0:18:30 | 0:08:07 | 0:10:23 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
fail | 7912359 | 2024-09-19 07:51:01 | 2024-09-19 08:02:52 | 2024-09-19 08:41:42 | 0:38:50 | 0:29:33 | 0:09:17 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason:
saw valgrind issues
fail | 7912360 | 2024-09-19 07:51:02 | 2024-09-19 08:03:02 | 2024-09-19 08:17:01 | 0:13:59 | 0:03:52 | 0:10:07 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
Command failed on smithi123 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912361 | 2024-09-19 07:51:04 | 2024-09-19 08:04:13 | 2024-09-19 08:31:28 | 0:27:15 | 0:17:16 | 0:09:59 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason:
saw valgrind issues
pass | 7912362 | 2024-09-19 07:51:05 | 2024-09-19 08:04:13 | 2024-09-19 08:53:18 | 0:49:05 | 0:39:52 | 0:09:13 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} | 2 | |
pass | 7912363 | 2024-09-19 07:51:06 | 2024-09-19 08:04:13 | 2024-09-19 08:48:06 | 0:43:53 | 0:33:24 | 0:10:29 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7912364 | 2024-09-19 07:51:08 | 2024-09-19 08:04:14 | 2024-09-19 08:49:25 | 0:45:11 | 0:35:02 | 0:10:09 | smithi | main | ubuntu | 20.04 | rados/thrash-old-clients/{0-distro$/{ubuntu_20.04} 0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
Command failed on smithi073 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents:TestClsRbd.mirror'"
fail | 7912365 | 2024-09-19 07:51:09 | 2024-09-19 08:05:24 | 2024-09-19 08:18:54 | 0:13:30 | 0:03:52 | 0:09:38 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi007 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912366 | 2024-09-19 07:51:10 | 2024-09-19 08:06:05 | 2024-09-19 08:19:08 | 0:13:03 | 0:03:54 | 0:09:09 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi044 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912367 | 2024-09-19 07:51:12 | 2024-09-19 08:06:16 | 2024-09-19 08:25:21 | 0:19:05 | 0:08:34 | 0:10:31 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi076 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7061ea6ca50908d104e9f3979dc615e118d9e153 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 7912368 | 2024-09-19 07:51:13 | 2024-09-19 08:06:57 | 2024-09-19 08:20:10 | 0:13:13 | 0:03:55 | 0:09:18 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
Command failed on smithi137 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912369 | 2024-09-19 07:51:14 | 2024-09-19 08:06:57 | 2024-09-19 08:20:41 | 0:13:44 | 0:04:42 | 0:09:02 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/classic task/test_cephadm_repos} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_repos.sh) on smithi170 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7061ea6ca50908d104e9f3979dc615e118d9e153 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
fail | 7912370 | 2024-09-19 07:51:16 | | 2024-09-19 08:25:42 | | 455 | | smithi | main | ubuntu | 20.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed on smithi033 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific pull'
fail | 7912371 | 2024-09-19 07:51:17 | 2024-09-19 08:07:48 | 2024-09-19 10:20:59 | 2:13:11 | 2:02:47 | 0:10:24 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason:
saw valgrind issues
fail | 7912372 | 2024-09-19 07:51:18 | 2024-09-19 08:07:49 | 2024-09-19 08:20:57 | 0:13:08 | 0:03:55 | 0:09:13 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi164 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
pass | 7912373 | 2024-09-19 07:51:19 | 2024-09-19 08:07:59 | 2024-09-19 09:37:55 | 1:29:56 | 1:19:19 | 0:10:37 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon} | 1 | |
pass | 7912374 | 2024-09-19 07:51:21 | 2024-09-19 08:07:59 | 2024-09-19 08:37:41 | 0:29:42 | 0:19:02 | 0:10:40 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
fail | 7912375 | 2024-09-19 07:51:22 | 2024-09-19 08:08:40 | 2024-09-19 08:21:58 | 0:13:18 | 0:03:54 | 0:09:24 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
Command failed on smithi052 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
pass | 7912376 | 2024-09-19 07:51:23 | 2024-09-19 08:09:00 | 2024-09-19 08:36:26 | 0:27:26 | 0:18:31 | 0:08:55 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-small-objects} | 2 | |
pass | 7912377 | 2024-09-19 07:51:25 | 2024-09-19 08:09:21 | 2024-09-19 08:34:10 | 0:24:49 | 0:15:53 | 0:08:56 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
fail | 7912378 | 2024-09-19 07:51:26 | 2024-09-19 08:09:21 | 2024-09-19 08:30:13 | 0:20:52 | 0:11:26 | 0:09:26 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on smithi092 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7061ea6ca50908d104e9f3979dc615e118d9e153 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'
fail | 7912379 | 2024-09-19 07:51:27 | 2024-09-19 08:09:21 | 2024-09-19 08:24:19 | 0:14:58 | 0:03:50 | 0:11:08 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason:
Command failed on smithi084 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912380 | 2024-09-19 07:51:28 | 2024-09-19 08:10:32 | 2024-09-19 14:43:40 | 6:33:08 | 6:21:05 | 0:12:03 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi003 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7061ea6ca50908d104e9f3979dc615e118d9e153 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7912381 | 2024-09-19 07:51:30 | 2024-09-19 08:12:03 | 2024-09-19 11:06:21 | 2:54:18 | 2:27:09 | 0:27:09 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed on smithi071 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:-*SyntheticMatrixC* --gtest_catch_exceptions=0\''
fail | 7912382 | 2024-09-19 07:51:31 | 2024-09-19 08:12:13 | 2024-09-19 08:36:14 | 0:24:01 | 0:13:21 | 0:10:40 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi120 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7061ea6ca50908d104e9f3979dc615e118d9e153 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7912383 | 2024-09-19 07:51:32 | 2024-09-19 08:13:13 | 2024-09-19 08:26:10 | 0:12:57 | 0:03:49 | 0:09:08 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi019 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912384 | 2024-09-19 07:51:33 | 2024-09-19 08:13:14 | 2024-09-19 08:26:21 | 0:13:07 | 0:03:53 | 0:09:14 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
Command failed on smithi070 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912385 | 2024-09-19 07:51:35 | 2024-09-19 08:13:24 | 2024-09-19 08:26:28 | 0:13:04 | 0:03:53 | 0:09:11 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
Command failed on smithi107 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912386 | 2024-09-19 07:51:36 | 2024-09-19 08:13:24 | 2024-09-19 08:52:57 | 0:39:33 | 0:29:24 | 0:10:09 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason:
saw valgrind issues
pass | 7912387 | 2024-09-19 07:51:37 | 2024-09-19 08:14:05 | 2024-09-19 08:37:14 | 0:23:09 | 0:14:32 | 0:08:37 | smithi | main | centos | 9.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} tasks/rados_cls_all} | 2 | |
fail | 7912388 | 2024-09-19 07:51:39 | 2024-09-19 08:14:15 | 2024-09-19 08:28:27 | 0:14:12 | 0:03:52 | 0:10:20 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi103 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7912389 | 2024-09-19 07:51:41 | 2024-09-19 08:15:06 | 2024-09-19 08:31:45 | 0:16:39 | 0:07:09 | 0:09:30 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi016 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7061ea6ca50908d104e9f3979dc615e118d9e153 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |