Locked machine:

Name:         smithi035.front.sepia.ceph.com
Machine Type: smithi
Up:           False
Locked:       True
Locked Since: 2023-06-06 18:46:24.616177
Locked By:    scheduled_yuriw@teuthology
OS:           centos 8 (x86_64)
Description:  Marked down by ceph-cm-ansible due to missing NVMe card 2023-06-06T19:09:28Z
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 7297088 2023-06-06 18:44:19 2023-06-06 18:45:54 2023-06-06 19:17:54 0:32:00 0:07:29 0:24:31 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

{'smithi035.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'msg': 'Failing rest of playbook due to missing NVMe card'}}

dead 7297070 2023-06-06 18:44:08 2023-06-06 18:56:08 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Error reimaging machines: Expected smithi005's OS to be rhel 8.6 but found centos 8

dead 7297066 2023-06-06 18:44:06 2023-06-06 18:44:27 2023-06-06 18:57:56 0:13:29 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

pass 7296888 2023-06-06 14:50:01 2023-06-06 16:42:37 2023-06-06 17:21:31 0:38:54 0:27:09 0:11:45 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_20.04} thrashers/mapgap thrashosds-health workloads/pool-snaps-few-objects} 2
pass 7296811 2023-06-06 14:49:04 2023-06-06 16:02:16 2023-06-06 16:43:15 0:40:59 0:31:06 0:09:53 smithi main centos 8.stream rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7296748 2023-06-06 14:48:17 2023-06-06 15:37:35 2023-06-06 16:02:43 0:25:08 0:18:36 0:06:32 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7296689 2023-06-06 14:47:31 2023-06-06 14:48:09 2023-06-06 15:37:28 0:49:19 0:35:25 0:13:54 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/snaps-few-objects} 2
dead 7296320 2023-06-05 20:32:14 2023-06-05 20:32:47 2023-06-05 20:53:16 0:20:29 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

fail 7296007 2023-06-04 23:00:09 2023-06-04 23:00:10 2023-06-05 03:41:16 4:41:06 4:08:43 0:32:23 smithi main ubuntu 20.04 krbd/rbd/{bluestore-bitmap clusters/fixed-3 conf ms_mode/legacy$/{legacy-rxbounce} msgr-failures/many tasks/rbd_workunit_suites_dbench} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi107 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aef7f6787496f28a04d0c6511a3205e79d473fa8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

fail 7295906 2023-06-03 23:54:01 2023-06-04 02:38:01 2023-06-04 06:30:16 3:52:15 3:40:08 0:12:07 smithi main ubuntu 20.04 krbd/rbd-nomount/{bluestore-bitmap clusters/fixed-3 conf install/ceph ms_mode/secure msgr-failures/many tasks/rbd_concurrent} 3
Failure Reason:

Command failed (workunit test rbd/concurrent.sh) on smithi195 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aef7f6787496f28a04d0c6511a3205e79d473fa8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/concurrent.sh'

fail 7295869 2023-06-03 23:53:33 2023-06-04 02:00:49 2023-06-04 02:38:02 0:37:13 0:14:20 0:22:53 smithi main ubuntu 20.04 krbd/fsx/{ceph/ceph clusters/3-node conf features/no-object-map ms_mode$/{legacy-rxbounce} objectstore/bluestore-bitmap striping/default/{msgr-failures/few randomized-striping-off} tasks/fsx-3-client} 3
Failure Reason:

Command failed on smithi195 with status 91: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_librbd_fsx --cluster ceph --id 0 -d -W -R -p 100 -P /home/ubuntu/cephtest/archive -r 512 -w 512 -t 512 -h 512 -l 250000000 -S 0 -N 10000 -K -U pool_client.0 image_client.0'

pass 7295820 2023-06-03 23:52:57 2023-06-04 01:22:51 2023-06-04 01:52:14 0:29:23 0:14:49 0:14:34 smithi main ubuntu 20.04 krbd/rbd-nomount/{bluestore-bitmap clusters/fixed-3 conf install/ceph ms_mode/legacy$/{legacy} msgr-failures/few tasks/krbd_exclusive_option} 3
pass 7295790 2023-06-03 23:52:34 2023-06-04 00:52:53 2023-06-04 01:22:23 0:29:30 0:13:16 0:16:14 smithi main ubuntu 20.04 krbd/basic/{bluestore-bitmap ceph/ceph clusters/fixed-1 conf ms_mode/crc$/{crc-rxbounce} tasks/krbd_discard} 1
pass 7295750 2023-06-03 23:52:04 2023-06-03 23:52:53 2023-06-04 00:53:45 1:00:52 0:23:02 0:37:50 smithi main ubuntu 20.04 krbd/rbd/{bluestore-bitmap clusters/fixed-3 conf ms_mode/crc$/{crc-rxbounce} msgr-failures/few tasks/rbd_workunit_kernel_untar_build} 3
pass 7295624 2023-06-03 14:24:40 2023-06-03 18:52:26 2023-06-03 21:39:25 2:46:59 2:30:31 0:16:28 smithi main ubuntu 20.04 upgrade:quincy-x/stress-split/{0-distro/ubuntu_20.04 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2
fail 7295584 2023-06-03 14:24:24 2023-06-03 17:18:22 2023-06-03 18:59:23 1:41:01 1:31:46 0:09:15 smithi main centos 8.stream upgrade:pacific-x/stress-split/{0-distro/centos_8.stream_container_tools 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi019 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

pass 7295520 2023-06-03 14:24:00 2023-06-03 14:25:06 2023-06-03 17:18:28 2:53:22 2:34:43 0:18:39 smithi main ubuntu 20.04 upgrade:pacific-x/stress-split/{0-distro/ubuntu_20.04 0-roles 1-start 2-first-half-tasks/snaps-few-objects 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} 2
pass 7295273 2023-06-02 17:55:21 2023-06-03 03:07:19 2023-06-03 05:27:57 2:20:38 2:12:26 0:08:12 smithi main rhel 8.6 upgrade:quincy-x/stress-split/{0-distro/rhel_8.6_container_tools_rhel8 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2
fail 7295245 2023-06-02 17:54:47 2023-06-03 02:48:34 2023-06-03 03:07:54 0:19:20 0:10:32 0:08:48 smithi main ubuntu 22.04 rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/kmip 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability s3tests-branch supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi035 with status 100: 'DEBIAN_FRONTEND=noninteractive sudo -E apt-get -y --force-yes install python-dev'

pass 7295156 2023-06-02 17:53:27 2023-06-03 01:18:12 2023-06-03 02:46:59 1:28:47 1:19:36 0:09:11 smithi main centos 8.stream rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-comp-snappy validator/memcheck workloads/c_api_tests_with_defaults} 1