User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2024-08-09 13:33:37 | 2024-08-09 14:35:09 | 2024-08-09 23:04:25 | 8:29:16 | rados | wip-yuri3-testing-2024-08-07-1435-quincy | smithi | 434008a | 4 | 35 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7845550 | 2024-08-09 13:34:53 | 2024-08-09 14:35:09 | 2024-08-09 14:50:14 | 0:15:05 | 0:04:40 | 0:10:25 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
Command failed on smithi037 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 7845551 | 2024-08-09 13:34:54 | 2024-08-09 14:37:39 | 2024-08-09 14:53:07 | 0:15:28 | 0:04:35 | 0:10:53 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
Command failed on smithi092 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
pass | 7845552 | 2024-08-09 13:34:55 | 2024-08-09 14:37:50 | 2024-08-09 15:16:48 | 0:38:58 | 0:27:44 | 0:11:14 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} | 2 | |
fail | 7845553 | 2024-08-09 13:34:57 | 2024-08-09 14:38:30 | 2024-08-09 15:18:01 | 0:39:31 | 0:30:36 | 0:08:55 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason:
saw valgrind issues |
fail | 7845554 | 2024-08-09 13:34:58 | 2024-08-09 14:39:01 | 2024-08-09 14:56:04 | 0:17:03 | 0:08:02 | 0:09:01 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi148 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434008a32f159a96b6c2c810cb953cfa0b51d571 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
fail | 7845555 | 2024-08-09 13:34:59 | 2024-08-09 14:39:01 | 2024-08-09 14:57:42 | 0:18:41 | 0:07:24 | 0:11:17 | smithi | main | ubuntu | 20.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed on smithi064 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific pull' |
fail | 7845556 | 2024-08-09 13:35:00 | 2024-08-09 14:39:41 | 2024-08-09 14:52:59 | 0:13:18 | 0:04:45 | 0:08:33 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason:
Command failed on smithi110 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 7845557 | 2024-08-09 13:35:02 | 2024-08-09 14:40:02 | 2024-08-09 14:56:21 | 0:16:19 | 0:04:59 | 0:11:20 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi029 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
pass | 7845558 | 2024-08-09 13:35:03 | 2024-08-09 14:41:02 | 2024-08-09 15:05:13 | 0:24:11 | 0:14:44 | 0:09:27 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} | 2 | |
fail | 7845559 | 2024-08-09 13:35:04 | 2024-08-09 14:41:13 | 2024-08-09 14:57:44 | 0:16:31 | 0:05:19 | 0:11:12 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_repos.sh) on smithi203 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434008a32f159a96b6c2c810cb953cfa0b51d571 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh' |
fail | 7845560 | 2024-08-09 13:35:05 | 2024-08-09 14:42:23 | 2024-08-09 14:55:27 | 0:13:04 | 0:04:37 | 0:08:27 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi070 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
dead | 7845561 | 2024-08-09 13:35:07 | 2024-08-09 14:42:33 | 2024-08-09 22:51:45 | 8:09:12 | | | smithi | main | ubuntu | 20.04 | rados/thrash-old-clients/{0-distro$/{ubuntu_20.04} 0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
hit max job timeout |
fail | 7845562 | 2024-08-09 13:35:08 | 2024-08-09 14:42:54 | 2024-08-09 15:49:37 | 1:06:43 | 0:55:28 | 0:11:15 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
saw valgrind issues |
fail | 7845563 | 2024-08-09 13:35:09 | 2024-08-09 14:44:04 | 2024-08-09 15:01:18 | 0:17:14 | 0:04:44 | 0:12:30 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi040 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 7845564 | 2024-08-09 13:35:10 | 2024-08-09 14:46:15 | 2024-08-09 15:09:48 | 0:23:33 | 0:12:11 | 0:11:22 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on smithi169 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434008a32f159a96b6c2c810cb953cfa0b51d571 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh' |
fail | 7845565 | 2024-08-09 13:35:12 | 2024-08-09 14:46:35 | 2024-08-09 15:02:30 | 0:15:55 | 0:04:39 | 0:11:16 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
Command failed on smithi002 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 7845566 | 2024-08-09 13:35:13 | 2024-08-09 14:47:27 | 2024-08-09 15:10:33 | 0:23:06 | 0:13:07 | 0:09:59 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi150 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434008a32f159a96b6c2c810cb953cfa0b51d571 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
dead | 7845567 | 2024-08-09 13:35:14 | 2024-08-09 14:47:27 | 2024-08-09 22:57:26 | 8:09:59 | | | smithi | main | ubuntu | 20.04 | rados/thrash-old-clients/{0-distro$/{ubuntu_20.04} 0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
hit max job timeout |
fail | 7845568 | 2024-08-09 13:35:15 | 2024-08-09 14:48:38 | 2024-08-09 15:08:47 | 0:20:09 | 0:08:53 | 0:11:16 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats' |
fail | 7845569 | 2024-08-09 13:35:17 | 2024-08-09 14:49:48 | 2024-08-09 15:29:10 | 0:39:22 | 0:29:39 | 0:09:43 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason:
saw valgrind issues |
fail | 7845570 | 2024-08-09 13:35:18 | 2024-08-09 15:05:57 | 292 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | ||||
Failure Reason:
Command failed on smithi191 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
pass | 7845571 | 2024-08-09 13:35:19 | 2024-08-09 14:50:59 | 2024-08-09 15:27:56 | 0:36:57 | 0:26:54 | 0:10:03 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
fail | 7845572 | 2024-08-09 13:35:20 | 2024-08-09 14:51:59 | 2024-08-09 15:18:36 | 0:26:37 | 0:16:39 | 0:09:58 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason:
saw valgrind issues |
dead | 7845573 | 2024-08-09 13:35:22 | 2024-08-09 14:52:00 | 2024-08-09 23:02:12 | 8:10:12 | | | smithi | main | ubuntu | 20.04 | rados/thrash-old-clients/{0-distro$/{ubuntu_20.04} 0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
hit max job timeout |
fail | 7845574 | 2024-08-09 13:35:23 | 2024-08-09 14:52:10 | 2024-08-09 15:05:24 | 0:13:14 | 0:04:40 | 0:08:34 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi079 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 7845575 | 2024-08-09 13:35:24 | 2024-08-09 14:52:31 | 2024-08-09 15:06:05 | 0:13:34 | 0:04:45 | 0:08:49 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi032 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 7845576 | 2024-08-09 13:35:25 | 2024-08-09 14:52:31 | 2024-08-09 15:10:10 | 0:17:39 | 0:08:28 | 0:09:11 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi084 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434008a32f159a96b6c2c810cb953cfa0b51d571 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
fail | 7845577 | 2024-08-09 13:35:27 | 2024-08-09 14:52:31 | 2024-08-09 15:08:53 | 0:16:22 | 0:04:39 | 0:11:43 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
Command failed on smithi110 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 7845578 | 2024-08-09 13:35:28 | 2024-08-09 14:53:12 | 2024-08-09 15:08:07 | 0:14:55 | 0:05:16 | 0:09:39 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/classic task/test_cephadm_repos} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_repos.sh) on smithi131 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434008a32f159a96b6c2c810cb953cfa0b51d571 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh' |
fail | 7845579 | 2024-08-09 13:35:29 | 2024-08-09 14:53:12 | 2024-08-09 15:12:17 | 0:19:05 | 0:07:24 | 0:11:41 | smithi | main | ubuntu | 20.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed on smithi092 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific pull' |
fail | 7845580 | 2024-08-09 13:35:31 | 2024-08-09 14:53:23 | 2024-08-09 17:11:09 | 2:17:46 | 2:07:47 | 0:09:59 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason:
saw valgrind issues |
fail | 7845581 | 2024-08-09 13:35:32 | 2024-08-09 14:53:53 | 2024-08-09 15:10:38 | 0:16:45 | 0:04:57 | 0:11:48 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi029 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
dead | 7845582 | 2024-08-09 13:35:33 | 2024-08-09 14:55:24 | 2024-08-09 23:04:25 | 8:09:01 | | | smithi | main | ubuntu | 20.04 | rados/thrash-old-clients/{0-distro$/{ubuntu_20.04} 0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
hit max job timeout |
fail | 7845583 | 2024-08-09 13:35:35 | 2024-08-09 14:55:44 | 2024-08-09 15:28:28 | 0:32:44 | 0:22:49 | 0:09:55 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon} | 1 | |
Failure Reason:
Command failed (workunit test mon/mon-cluster-log.sh) on smithi148 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434008a32f159a96b6c2c810cb953cfa0b51d571 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-cluster-log.sh' |
fail | 7845584 | 2024-08-09 13:35:36 | 2024-08-09 14:56:24 | 2024-08-09 15:11:08 | 0:14:44 | 0:04:40 | 0:10:04 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
Command failed on smithi064 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
fail | 7845585 | 2024-08-09 13:35:37 | 2024-08-09 14:57:55 | 2024-08-09 15:19:04 | 0:21:09 | 0:12:09 | 0:09:00 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on smithi107 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434008a32f159a96b6c2c810cb953cfa0b51d571 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh' |
fail | 7845586 | 2024-08-09 13:35:38 | 2024-08-09 14:57:55 | 2024-08-09 15:13:43 | 0:15:48 | 0:04:40 | 0:11:08 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason:
Command failed on smithi178 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |
pass | 7845587 | 2024-08-09 13:35:40 | 2024-08-09 14:58:16 | 2024-08-09 15:27:17 | 0:29:01 | 0:19:54 | 0:09:07 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/small-objects-balanced} | 2 | |
fail | 7845588 | 2024-08-09 13:35:41 | 2024-08-09 14:58:16 | 2024-08-09 16:12:56 | 1:14:40 | 1:03:36 | 0:11:04 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
saw valgrind issues |
fail | 7845589 | 2024-08-09 13:35:42 | 2024-08-09 15:00:07 | 2024-08-09 17:50:48 | 2:50:41 | 2:21:56 | 0:28:45 | smithi | main | centos | 9.stream | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason:
Command failed on smithi066 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:-*SyntheticMatrixC* --gtest_catch_exceptions=0\'' |
fail | 7845590 | 2024-08-09 13:35:44 | 2024-08-09 15:00:07 | 2024-08-09 15:23:21 | 0:23:14 | 0:13:08 | 0:10:06 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi164 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434008a32f159a96b6c2c810cb953cfa0b51d571 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
fail | 7845591 | 2024-08-09 13:35:45 | 2024-08-09 15:00:07 | 2024-08-09 15:23:08 | 0:23:01 | 0:14:14 | 0:08:47 | smithi | main | centos | 9.stream | rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason:
"1723216546.5623152 mon.c (mon.2) 58 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log |
fail | 7845592 | 2024-08-09 13:35:46 | 2024-08-09 15:00:08 | 2024-08-09 15:15:38 | 0:15:30 | 0:04:30 | 0:11:00 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi146 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn' |