Status · Job ID · Posted · Started · Updated · Runtime · Duration · In Waiting · Machine · Teuthology Branch · OS Type · OS Version · Description · Nodes
pass 7527386 2024-01-22 19:47:01 2024-01-23 08:11:00 2024-01-23 08:37:04 0:26:04 0:12:43 0:13:21 smithi main centos 9.stream fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
fail 7527387 2024-01-22 19:47:02 2024-01-23 08:14:22 2024-01-23 08:45:09 0:30:47 0:16:59 0:13:48 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/fsync-tester}} 3
Failure Reason: Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0edf41a622739843d6b978b179ff3227b476dd9d pull'

fail 7527388 2024-01-22 19:47:03 2024-01-23 08:15:23 2024-01-23 12:07:42 3:52:19 3:38:24 0:13:55 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: Command failed (workunit test suites/fsstress.sh) on smithi100 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9db6bac5568c8be6cfd98da20bd2b62582f0776f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

fail 7527389 2024-01-22 19:47:04 2024-01-23 08:16:23 2024-01-23 08:46:30 0:30:07 0:18:02 0:12:05 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} 3
Failure Reason: Command failed on smithi012 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0edf41a622739843d6b978b179ff3227b476dd9d pull'

fail 7527390 2024-01-22 19:47:05 2024-01-23 08:17:24 2024-01-23 08:49:53 0:32:29 0:21:25 0:11:04 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
Failure Reason: Command failed on smithi077 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell 4.1f deep-scrub'

fail 7527391 2024-01-22 19:47:05 2024-01-23 08:18:24 2024-01-23 09:30:57 1:12:33 0:50:30 0:22:03 smithi main centos 9.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/valgrind} 2
Failure Reason: valgrind error: Leak_StillReachable malloc malloc strdup

fail 7527392 2024-01-22 19:47:06 2024-01-23 08:18:55 2024-01-23 09:04:32 0:45:37 0:35:11 0:10:26 smithi main centos 9.stream fs/shell/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/centos_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cephfs-shell} 2
Failure Reason: Test failure: test_reading_conf (tasks.cephfs.test_cephfs_shell.TestShellOpts)

dead 7527393 2024-01-22 19:47:07 2024-01-23 08:19:45 2024-01-23 20:30:00 12:10:15 smithi main centos 9.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/ignorelist_health tasks/mirror}} 1
Failure Reason: hit max job timeout

fail 7527394 2024-01-22 19:47:08 2024-01-23 08:19:45 2024-01-23 08:47:01 0:27:16 0:16:37 0:10:39 smithi main centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_8.stream conf/{client mds mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-upgrade}} 1
Failure Reason: Command failed on smithi169 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7527395 2024-01-22 19:47:09 2024-01-23 08:19:46 2024-01-23 08:44:52 0:25:06 0:09:08 0:15:58 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} 3
Failure Reason: Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0edf41a622739843d6b978b179ff3227b476dd9d pull'

fail 7527396 2024-01-22 19:47:09 2024-01-23 08:19:46 2024-01-23 09:35:24 1:15:38 1:03:32 0:12:06 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3fc07a38-b9cb-11ee-95b1-87774f69a715 -e sha1=0edf41a622739843d6b978b179ff3227b476dd9d -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | keys\'"\'"\' | grep $sha1\''

fail 7527397 2024-01-22 19:47:10 2024-01-23 08:20:07 2024-01-23 08:52:19 0:32:12 0:21:17 0:10:55 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason: Command failed on smithi031 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0edf41a622739843d6b978b179ff3227b476dd9d pull'

fail 7527398 2024-01-22 19:47:11 2024-01-23 08:21:18 2024-01-23 09:00:21 0:39:03 0:30:14 0:08:49 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
Failure Reason: Command failed on smithi079 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell 4.3a deep-scrub'

fail 7527399 2024-01-22 19:47:12 2024-01-23 08:21:18 2024-01-23 08:59:55 0:38:37 0:21:29 0:17:08 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/postgres}} 3
Failure Reason: Command failed on smithi141 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0edf41a622739843d6b978b179ff3227b476dd9d pull'

fail 7527400 2024-01-22 19:47:12 2024-01-23 08:22:39 2024-01-23 12:15:00 3:52:21 3:36:26 0:15:55 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: Command failed (workunit test suites/fsstress.sh) on smithi188 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9db6bac5568c8be6cfd98da20bd2b62582f0776f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

fail 7527401 2024-01-22 19:47:13 2024-01-23 08:25:50 2024-01-23 08:56:29 0:30:39 0:16:11 0:14:28 smithi main ubuntu 22.04 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
Failure Reason: Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)

fail 7527402 2024-01-22 19:47:14 2024-01-23 08:27:10 2024-01-23 09:33:23 1:06:13 0:55:40 0:10:33 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/dbench validater/valgrind} 2
Failure Reason: valgrind error: Leak_StillReachable calloc calloc _dl_check_map_versions

fail 7527403 2024-01-22 19:47:15 2024-01-23 08:27:41 2024-01-23 09:02:20 0:34:39 0:21:55 0:12:44 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason: Command failed on smithi032 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0edf41a622739843d6b978b179ff3227b476dd9d pull'

fail 7527404 2024-01-22 19:47:15 2024-01-23 08:29:01 2024-01-23 09:03:10 0:34:09 0:21:57 0:12:12 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
Failure Reason: Command failed on smithi118 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell 4.27 deep-scrub'

fail 7527405 2024-01-22 19:47:16 2024-01-23 08:29:12 2024-01-23 08:53:53 0:24:41 0:10:18 0:14:23 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/pool-perm} 2
Failure Reason: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

fail 7527406 2024-01-22 19:47:17 2024-01-23 08:33:03 2024-01-23 12:56:44 4:23:41 3:59:46 0:23:55 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: Command failed (workunit test suites/fsstress.sh) on smithi176 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9db6bac5568c8be6cfd98da20bd2b62582f0776f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

fail 7527407 2024-01-22 19:47:18 2024-01-23 08:33:33 2024-01-23 09:01:14 0:27:41 0:17:58 0:09:43 smithi main centos 9.stream fs/cephadm/renamevolume/{0-start 1-rename distro/single-container-host overrides/ignorelist_health} 2
Failure Reason: Command failed on smithi110 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0edf41a622739843d6b978b179ff3227b476dd9d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6db5ddf6-b9cc-11ee-95b1-87774f69a715 -- bash -c 'ceph fs volume rename foo bar --yes-i-really-mean-it'"

fail 7527408 2024-01-22 19:47:19 2024-01-23 08:34:14 2024-01-23 08:59:41 0:25:27 0:12:41 0:12:46 smithi main ubuntu 22.04 fs/shell/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/ubuntu_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cephfs-shell} 2
Failure Reason: Test failure: test_cd_with_args (tasks.cephfs.test_cephfs_shell.TestCD)

fail 7527409 2024-01-22 19:47:19 2024-01-23 08:35:04 2024-01-23 08:55:24 0:20:20 0:09:04 0:11:16 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/norstats}} 3
Failure Reason: Command failed on smithi089 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0edf41a622739843d6b978b179ff3227b476dd9d pull'

fail 7527410 2024-01-22 19:47:20 2024-01-23 08:36:25 2024-01-23 09:03:19 0:26:54 0:18:07 0:08:47 smithi main centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_8.stream conf/{client mds mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-upgrade}} 1
Failure Reason: Command failed on smithi131 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 7527411 2024-01-22 19:47:21 2024-01-23 08:36:25 2024-01-23 10:57:04 2:20:39 2:08:14 0:12:25 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/fsstress validater/valgrind} 2
Failure Reason: valgrind error: Leak_PossiblyLost calloc __trans_list_add

fail 7527412 2024-01-22 19:47:22 2024-01-23 08:37:06 2024-01-23 08:57:46 0:20:40 0:08:48 0:11:52 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsx}} 3
Failure Reason: Command failed on smithi052 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0edf41a622739843d6b978b179ff3227b476dd9d pull'

fail 7527413 2024-01-22 19:47:22 2024-01-23 08:38:46 2024-01-23 09:48:15 1:09:29 0:56:51 0:12:38 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 269c41a2-b9cd-11ee-95b1-87774f69a715 -e sha1=0edf41a622739843d6b978b179ff3227b476dd9d -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | keys\'"\'"\' | grep $sha1\''

pass 7527414 2024-01-22 19:47:23 2024-01-23 08:39:27 2024-01-23 09:23:54 0:44:27 0:29:40 0:14:47 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
fail 7527415 2024-01-22 19:47:24 2024-01-23 08:40:37 2024-01-23 09:04:01 0:23:24 0:09:09 0:14:15 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/test_o_trunc}} 3
Failure Reason: Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0edf41a622739843d6b978b179ff3227b476dd9d pull'