User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead
---|---|---|---|---|---|---|---|---|---|---|---
yuriw | 2024-08-21 20:34:21 | 2024-08-21 20:36:37 | 2024-08-22 06:50:55 | 10:14:18 | rados | wip-yuri11-testing-2024-08-20-1207-squid | smithi | cbfba43 | 1 | 17 | 2
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 7866290 | 2024-08-21 20:35:42 | 2024-08-21 20:36:37 | 2024-08-21 21:34:13 | 0:57:36 | 0:48:26 | 0:09:10 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1
Failure Reason:
Command failed (workunit test osd/osd-bluefs-volume-ops.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cbfba43a549f8dace8c3a3174652322d6ce7f5db TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh'
(See the expanded workunit invocation after the table.)
fail | 7866291 | 2024-08-21 20:35:43 | 2024-08-21 20:36:37 | 2024-08-21 20:45:57 | 0:09:20 | 0:01:49 | 0:07:31 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2
Failure Reason:
Command failed on smithi079 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
(See the expanded nvme-loop setup script after the table.)
fail | 7866292 | 2024-08-21 20:35:45 | 2024-08-21 20:36:58 | 2024-08-21 22:46:35 | 2:09:37 | 2:02:58 | 0:06:39 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} | 1
Failure Reason:
Command failed (workunit test scrub/osd-scrub-test.sh) on smithi122 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cbfba43a549f8dace8c3a3174652322d6ce7f5db TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-test.sh'
fail | 7866293 | 2024-08-21 20:35:46 | 2024-08-21 20:37:28 | 2024-08-21 20:46:08 | 0:08:40 | 0:01:53 | 0:06:47 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2
Failure Reason:
Command failed on smithi017 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
dead | 7866294 | 2024-08-21 20:35:47 | 2024-08-21 20:37:29 | 2024-08-22 06:50:08 | 10:12:39 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1
Failure Reason:
hit max job timeout
fail | 7866295 | 2024-08-21 20:35:49 | 2024-08-21 20:37:39 | 2024-08-21 23:07:11 | 2:29:32 | 2:17:17 | 0:12:15 | smithi | main | ubuntu | 22.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2
Failure Reason:
"2024-08-21T21:10:10.664989+0000 mon.a (mon.0) 678 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
(See the cluster-log grep sketch after the table.)
fail | 7866296 | 2024-08-21 20:35:50 | 2024-08-21 20:37:50 | 2024-08-21 20:46:43 | 0:08:53 | 0:01:48 | 0:07:05 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2
Failure Reason:
Command failed on smithi089 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7866297 | 2024-08-21 20:35:51 | 2024-08-21 20:37:50 | 2024-08-21 20:46:34 | 0:08:44 | 0:01:51 | 0:06:53 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2
Failure Reason:
Command failed on smithi022 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7866298 | 2024-08-21 20:35:53 | 2024-08-21 20:37:50 | 2024-08-21 21:05:25 | 0:27:35 | 0:19:56 | 0:07:39 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2
Failure Reason:
"2024-08-21T21:01:44.481262+0000 mon.a (mon.0) 593 : cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
fail | 7866299 | 2024-08-21 20:35:54 | 2024-08-21 20:38:21 | 2024-08-21 22:29:04 | 1:50:43 | 1:43:27 | 0:07:16 | smithi | main | centos | 9.stream | rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} | 3
Failure Reason:
"2024-08-21T21:20:00.000151+0000 mon.a (mon.0) 2819 : cluster [WRN] pg 6.3 is active+recovering+undersized+degraded+remapped, acting [7,6]" in cluster log
fail | 7866300 | 2024-08-21 20:35:56 | 2024-08-21 20:39:52 | 2024-08-21 20:49:56 | 0:10:04 | 0:01:51 | 0:08:13 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2
Failure Reason:
Command failed on smithi152 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7866301 | 2024-08-21 20:35:57 | 2024-08-21 20:40:53 | 2024-08-21 21:06:57 | 0:26:04 | 0:17:05 | 0:08:59 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1
Failure Reason:
Command failed (workunit test rados/test_rados_tool.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cbfba43a549f8dace8c3a3174652322d6ce7f5db TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_rados_tool.sh'
fail | 7866302 | 2024-08-21 20:35:58 | 2024-08-21 20:41:23 | 2024-08-21 20:51:58 | 0:10:35 | 0:01:53 | 0:08:42 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} | 2
Failure Reason:
Command failed on smithi018 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
dead | 7866303 | 2024-08-21 20:36:00 | 2024-08-21 20:43:14 | 2024-08-22 06:50:55 | 10:07:41 | | | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1
Failure Reason:
hit max job timeout
fail | 7866304 | 2024-08-21 20:36:01 | 2024-08-21 20:43:14 | 2024-08-21 22:42:57 | 1:59:43 | 1:52:29 | 0:07:14 | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2
Failure Reason:
"2024-08-21T20:58:01.683499+0000 mon.a (mon.0) 520 : cluster [WRN] Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log
pass | 7866305 | 2024-08-21 20:36:02 | 2024-08-21 20:43:44 | 2024-08-21 21:23:50 | 0:40:06 | 0:32:09 | 0:07:57 | smithi | main | centos | 9.stream | rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 3
fail | 7866306 | 2024-08-21 20:36:04 | 2024-08-21 20:44:35 | 2024-08-21 22:40:16 | 1:55:41 | 1:48:23 | 0:07:18 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2
Failure Reason:
Command failed on smithi164 with status 32: 'sync && sudo umount -f /var/lib/ceph/osd/ceph-7'
fail | 7866307 | 2024-08-21 20:36:05 | 2024-08-21 20:44:35 | 2024-08-21 20:53:56 | 0:09:21 | 0:01:50 | 0:07:31 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2
Failure Reason:
Command failed on smithi088 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
fail | 7866308 | 2024-08-21 20:36:06 | 2024-08-21 20:44:36 | 2024-08-21 21:12:23 | 0:27:47 | 0:20:09 | 0:07:38 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} | 2
Failure Reason:
"2024-08-21T21:09:32.475787+0000 mon.a (mon.0) 599 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7866309 | 2024-08-21 20:36:08 | 2024-08-21 20:44:46 | 2024-08-21 20:55:00 | 0:10:14 | 0:01:52 | 0:08:22 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2
Failure Reason:
Command failed on smithi026 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
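Three failures above (7866290, 7866292, 7866301) are workunit failures driven by the same wrapper one-liner. For readability, here is the 7866290 invocation unpacked into an equivalent script; every path, variable, and SHA1 is copied from the failure reason, and `adjust-ulimits`/`ceph-coverage` are the teuthology-provided wrappers, so treat this as a reproduction aid on a test node rather than the canonical workunit task.

```bash
#!/usr/bin/env bash
# Readable expansion of the failed workunit command from job 7866290.
set -ex

TESTDIR=/home/ubuntu/cephtest
CEPH_REF=cbfba43a549f8dace8c3a3174652322d6ce7f5db

# The workunit task runs each script from a per-client tmp directory.
mkdir -p -- "$TESTDIR/mnt.0/client.0/tmp"
cd -- "$TESTDIR/mnt.0/client.0/tmp"

# Environment passed to the test script (inline assignments in the
# original one-liner; exported here for readability).
export TESTDIR CEPH_REF
export CEPH_CLI_TEST_DUP_COMMAND=1
export CEPH_ARGS="--cluster ceph"
export CEPH_ID=0
export PATH="$PATH:/usr/sbin"
export CEPH_BASE="$TESTDIR/clone.client.0"
export CEPH_ROOT="$TESTDIR/clone.client.0"
export CEPH_MNT="$TESTDIR/mnt.0"

# The test gets a 3h budget. This job exited with status 1 at 0:48:26,
# well inside the window, so it is a genuine test failure rather than a
# "hit max job timeout" dead job.
adjust-ulimits ceph-coverage "$TESTDIR/archive/coverage" \
    timeout 3h "$TESTDIR/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh"
```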
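Eight jobs (7866291, 7866293, 7866296, 7866297, 7866300, 7866302, 7866307, 7866309) failed identically, roughly two minutes in, on the 0-nvme-loop setup step with status 234, which points at the environment rather than the branch under test. The one-liner is hard to read inline; below it is unpacked into an equivalent script with each configfs step commented. Everything is copied from the failure reasons; the note about where status 234 likely originates is an inference, not something the quoted logs confirm.

```bash
#!/usr/bin/env bash
# Step-by-step form of the failed nvme-loop one-liner. /dev/vg_nvme/lv_1
# is the LVM logical volume the smithi nodes dedicate to the test.
set -ex

SUBSYS=/sys/kernel/config/nvmet/subsystems/lv_1

# Create an nvmet subsystem and allow any host to connect to it.
sudo mkdir -p "$SUBSYS"
echo 1 | sudo tee "$SUBSYS/attr_allow_any_host"

# Back namespace 1 with the logical volume and enable it.
sudo mkdir -p "$SUBSYS/namespaces/1"
echo -n /dev/vg_nvme/lv_1 | sudo tee "$SUBSYS/namespaces/1/device_path"
echo 1 | sudo tee "$SUBSYS/namespaces/1/enable"

# Publish the subsystem on loop port 1 (the port is created elsewhere;
# it must already exist for this ln -s to succeed).
sudo ln -s "$SUBSYS" /sys/kernel/config/nvmet/ports/1/subsystems/lv_1

# The configfs writes above fail with small exit codes when they fail at
# all, so an overall status of 234 most plausibly comes from nvme-cli in
# this final step; an inference, since the && chain reports one status.
sudo nvme connect -t loop -n lv_1 -q hostnqn
```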
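The remaining failures (7866295, 7866298, 7866299, 7866304, 7866308) are not command errors: the jobs ran, but a [WRN]-level health event in the cluster log was not whitelisted. A quick way to surface every such line from a job's archive is a grep like the sketch below; the archive path is an assumption (layouts vary by lab), so substitute the run's actual location.

```bash
# A minimal sketch, assuming per-remote logs live under the job archive
# at remote/<host>/log/ceph.log (an assumption; adjust to this lab's
# actual layout). Counts distinct [WRN] cluster-log lines, most frequent
# first, to spot what needs whitelisting or fixing.
grep -hE 'cluster \[WRN\]' /path/to/archive/7866295/remote/*/log/ceph.log \
  | sed -E 's/^[^ ]+ //' | sort | uniq -c | sort -rn | head -20
```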