Name: smithi064.front.sepia.ceph.com
Machine Type: smithi
Up: True
Locked: True
Locked Since: 2024-06-02 21:08:29.268678
Locked By: scheduled_teuthology@teuthology
OS Type: centos
OS Version: 9
Arch: x86_64
Description: /home/teuthworker/archive/teuthology-2024-06-02_21:00:03-rados-squid-distro-default-smithi/7738246
Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
running 7738246 2024-06-02 21:01:48 2024-06-02 21:05:28 2024-06-02 22:45:40 1:40:31 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
fail 7737868 2024-06-02 17:35:59 2024-06-02 20:33:35 2024-06-02 21:08:27 0:34:52 0:25:53 0:08:59 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi064 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=152c981df46be22d613dcdb815661a8256f3d4d9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7737850 2024-06-02 17:35:40 2024-06-02 20:25:07 2024-06-02 20:33:24 0:08:17 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} 8
Failure Reason:

too many values to unpack (expected 1)
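This failure reason is the message of a standard Python ValueError raised when iterable unpacking receives more items than it has targets. The teuthology call site that raised it is not shown in this log; a minimal illustration of how the same error arises:

```python
# Illustrative only: reproduces the "too many values to unpack" ValueError.
# `head` is a hypothetical helper, not the teuthology code that failed.
def head(items):
    (only,) = items  # single-target unpack: expects exactly 1 element
    return only

print(head([42]))    # one element: unpacks fine

try:
    head([1, 2, 3])  # three elements into one target
except ValueError as e:
    print(e)         # prints: too many values to unpack (expected 1)
```

The "(expected 1)" in the log therefore pins the bug to a single-target unpack being fed a longer sequence.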

dead 7737838 2024-06-02 17:35:26 2024-06-02 20:12:31 2024-06-02 20:13:45 0:01:14 smithi main centos 9.stream rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} tasks/mon_clock_no_skews} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi064

fail 7737784 2024-06-02 17:34:24 2024-06-02 19:39:35 2024-06-02 20:12:37 0:33:02 0:24:03 0:08:59 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=152c981df46be22d613dcdb815661a8256f3d4d9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

dead 7737746 2024-06-02 17:33:42 2024-06-02 19:24:09 2024-06-02 19:25:13 0:01:04 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} 4
Failure Reason:

Error reimaging machines: Failed to power on smithi130

dead 7737729 2024-06-02 17:33:23 2024-06-02 19:11:11 2024-06-02 19:12:15 0:01:04 smithi main centos 9.stream rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} tasks/mon_clock_no_skews} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi064

pass 7737651 2024-06-02 17:31:55 2024-06-02 18:22:25 2024-06-02 19:11:07 0:48:42 0:35:02 0:13:40 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/snaps-few-objects} 2
fail 7737549 2024-06-02 15:35:37 2024-06-02 15:42:51 2024-06-02 17:02:09 1:19:18 1:09:51 0:09:27 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/simple thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

reached maximum tries (501) after waiting for 3000 seconds
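This message is characteristic of a bounded polling loop (teuthology's `safe_while` helper produces a message of this shape). Note the arithmetic: 501 tries with a sleep between attempts means 500 sleeps, and 500 × 6 s = 3000 s, matching the log. A re-implementation sketch with hypothetical names (`wait_until`, `MaxTriesError` are illustrative, not teuthology's API):

```python
import time


class MaxTriesError(Exception):
    """Raised when the polling budget is exhausted."""


def wait_until(predicate, tries=501, sleep=6.0, _sleep=time.sleep):
    """Poll `predicate` up to `tries` times, sleeping `sleep` seconds
    between attempts; return the attempt number on success, raise once
    the budget is exhausted. `_sleep` is injectable for testing."""
    waited = 0.0
    for attempt in range(1, tries + 1):
        if predicate():
            return attempt
        if attempt < tries:       # no sleep after the final failed try
            _sleep(sleep)
            waited += sleep
    raise MaxTriesError(
        f"reached maximum tries ({tries}) after waiting for {int(waited)} seconds"
    )
```

With the defaults above, a predicate that never succeeds raises after roughly 50 minutes, which is consistent with a radosbench workload stalling until the harness gives up.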

pass 7737468 2024-06-02 05:18:41 2024-06-02 07:18:07 2024-06-02 08:19:10 1:01:03 0:48:45 0:12:18 smithi main ubuntu 20.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap s3tests-branch supported-all-distro/ubuntu_20.04 tasks/{0-install test/rbd_api_tests}} 3
dead 7737449 2024-06-02 05:18:20 2024-06-02 07:01:55 2024-06-02 07:03:40 0:01:45 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap s3tests-branch supported-all-distro/ubuntu_latest tasks/{0-install test/rgw_ec_s3tests}} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi039

pass 7737411 2024-06-02 05:17:40 2024-06-02 06:28:59 2024-06-02 07:02:36 0:33:37 0:21:29 0:12:08 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap s3tests-branch supported-all-distro/centos_latest tasks/{0-install test/libcephfs_interface_tests}} 3
pass 7737371 2024-06-02 05:16:58 2024-06-02 05:56:04 2024-06-02 06:30:31 0:34:27 0:22:40 0:11:47 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap s3tests-branch supported-all-distro/centos_latest tasks/{0-install test/rados_workunit_loadgen_mix}} 3
fail 7737312 2024-06-02 05:01:51 2024-06-02 05:14:04 2024-06-02 05:56:26 0:42:22 0:28:46 0:13:36 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/mon_thrash}} 3
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi064 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=17c6b5a14202fceea4b4134eaef845f7dbda470b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

dead 7737303 2024-06-02 05:01:41 2024-06-02 05:04:30 2024-06-02 05:07:24 0:02:54 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/cfuse_workunit_suites_blogbench}} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi039

pass 7737286 2024-06-01 22:49:46 2024-06-02 12:00:11 2024-06-02 12:36:03 0:35:52 0:26:11 0:09:41 smithi main centos 9.stream krbd/fsx/{ceph/ceph clusters/3-node conf features/no-deep-flatten ms_mode$/{crc-rxbounce} objectstore/bluestore-bitmap striping/fancy/{msgr-failures/few randomized-striping-on} tasks/fsx-1-client} 3
fail 7737255 2024-06-01 22:49:16 2024-06-02 11:37:16 2024-06-02 12:00:51 0:23:35 0:12:07 0:11:28 smithi main centos 9.stream krbd/rbd/{bluestore-bitmap clusters/fixed-3 conf ms_mode/crc$/{crc} msgr-failures/few tasks/rbd_workunit_suites_fsx} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi192 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=71c5cfc99f93c162032256d34d9b60b8288fad91 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

fail 7737219 2024-06-01 22:48:40 2024-06-02 11:01:58 2024-06-02 11:37:41 0:35:43 0:23:27 0:12:16 smithi main centos 9.stream krbd/fsx/{ceph/ceph clusters/3-node conf features/no-deep-flatten ms_mode$/{legacy} objectstore/bluestore-bitmap striping/default/{msgr-failures/few randomized-striping-off} tasks/fsx-1-client} 3
Failure Reason:

SELinux denials found on ubuntu@smithi064.front.sepia.ceph.com: ['type=AVC msg=audit(1717326845.510:196): avc: denied { checkpoint_restore } for pid=1089 comm="agetty" capability=40 scontext=system_u:system_r:getty_t:s0-s0:c0.c1023 tcontext=system_u:system_r:getty_t:s0-s0:c0.c1023 tclass=capability2 permissive=1']
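The failure reason above (and the similar one further down) embeds a raw SELinux audit AVC record. The interesting fields are the denied permission(s) inside `{ ... }`, the offending command (`comm=`), the source context (`scontext=`), and the target class (`tclass=`). A small parser sketch for pulling those fields out of such a record (illustrative, not part of teuthology):

```python
import re

# Matches one audit AVC record; group names are this sketch's own choice.
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perms>[^}]+) \}"
    r".*?comm=\"(?P<comm>[^\"]+)\""
    r".*?scontext=(?P<scontext>\S+)"
    r".*?tclass=(?P<tclass>\S+)"
)


def parse_avc(record):
    """Extract denied permissions, command, source context, and target
    class from one AVC record; return None if the record doesn't match."""
    m = AVC_RE.search(record)
    if not m:
        return None
    fields = m.groupdict()
    fields["perms"] = fields["perms"].split()  # may deny several perms at once
    return fields
```

Applied to the record above, this yields perms `['checkpoint_restore']`, comm `agetty`, and tclass `capability2`; note `permissive=1`, so the denial was logged but not enforced.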

fail 7737190 2024-06-01 21:49:07 2024-06-02 01:41:10 2024-06-02 02:07:53 0:26:43 0:15:29 0:11:14 smithi main centos 9.stream krbd/rbd/{bluestore-bitmap clusters/fixed-3 conf ms_mode/legacy$/{legacy-rxbounce} msgr-failures/few tasks/rbd_workunit_suites_iozone} 3
Failure Reason:

SELinux denials found on ubuntu@smithi064.front.sepia.ceph.com: ['type=AVC msg=audit(1717293121.511:208): avc: denied { checkpoint_restore } for pid=1162 comm="agetty" capability=40 scontext=system_u:system_r:getty_t:s0-s0:c0.c1023 tcontext=system_u:system_r:getty_t:s0-s0:c0.c1023 tclass=capability2 permissive=1']

pass 7737161 2024-06-01 21:48:37 2024-06-02 01:13:52 2024-06-02 01:42:11 0:28:19 0:16:42 0:11:37 smithi main centos 9.stream krbd/rbd-nomount/{bluestore-bitmap clusters/fixed-3 conf install/ceph ms_mode/legacy$/{legacy-rxbounce} msgr-failures/many tasks/krbd_latest_osdmap_on_map} 3