Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7316219 2023-06-26 12:25:44 2023-06-26 12:26:36 2023-06-26 13:05:56 0:39:20 0:15:55 0:23:25 smithi main ubuntu 20.04 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_20.04} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs_python} 2
Failure Reason:

Command failed (workunit test fs/test_python.sh) on smithi118 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=103353ef99ef73fad56b81ff8231522b72313148 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/test_python.sh'

fail 7316220 2023-06-26 12:25:44 2023-06-26 12:26:36 2023-06-26 13:10:15 0:43:39 0:24:17 0:19:22 smithi main ubuntu 22.04 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi038 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=103353ef99ef73fad56b81ff8231522b72313148 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/kernel_untar_build.sh'

fail 7316221 2023-06-26 12:25:45 2023-06-26 12:26:37 2023-06-26 13:06:55 0:40:18 0:16:25 0:23:53 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} 5
Failure Reason:

Command failed on smithi139 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 7316222 2023-06-26 12:25:46 2023-06-26 12:26:37 2023-06-26 13:17:40 0:51:03 0:39:27 0:11:36 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7316223 2023-06-26 12:25:46 2023-06-26 12:26:38 2023-06-26 13:38:19 1:11:41 0:53:36 0:18:05 smithi main centos 8.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
fail 7316224 2023-06-26 12:25:47 2023-06-26 12:26:38 2023-06-26 13:20:54 0:54:16 0:41:52 0:12:24 smithi main rhel 8.6 fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{rhel_8} workloads/cephfs-mirror-ha-workunit} 1
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7316225 2023-06-26 12:25:48 2023-06-26 12:26:38 2023-06-26 13:27:03 1:00:25 0:37:21 0:23:04 smithi main ubuntu 22.04 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
fail 7316226 2023-06-26 12:25:48 2023-06-26 12:26:39 2023-06-26 14:07:37 1:40:58 1:30:45 0:10:13 smithi main centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
Failure Reason:

Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

pass 7316227 2023-06-26 12:25:49 2023-06-26 12:26:39 2023-06-26 13:11:16 0:44:37 0:33:25 0:11:12 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
pass 7316228 2023-06-26 12:25:50 2023-06-26 12:26:40 2023-06-26 13:38:50 1:12:10 0:51:05 0:21:05 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/exports} 2
fail 7316229 2023-06-26 12:25:50 2023-06-26 12:26:40 2023-06-26 12:58:24 0:31:44 0:15:00 0:16:44 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi110 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific pull'

fail 7316230 2023-06-26 12:25:51 2023-06-26 12:26:41 2023-06-26 13:04:51 0:38:10 0:15:47 0:22:23 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 4
Failure Reason:

Command failed on smithi202 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 7316231 2023-06-26 12:25:52 2023-06-26 12:26:41 2023-06-26 13:19:45 0:53:04 0:40:26 0:12:38 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/fs/test_o_trunc}} 3
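
To reschedule only the failed jobs above, teuthology's suite scheduler can rerun a run filtered by job status. A minimal sketch, assuming access to a teuthology scheduling node; <run-name> is a placeholder for this run's name, which is not shown in the table above:

    # rerun only the jobs from <run-name> that ended in "fail"
    teuthology-suite --rerun <run-name> --rerun-statuses fail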