Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi175.front.sepia.ceph.com smithi True True 2024-05-08 12:21:02.813912 scheduled_vshankar@teuthology ubuntu 22.04 x86_64 /home/teuthworker/archive/vshankar-2024-05-07_03:44:24-fs-wip-vshankar-testing-20240506.153513-testing-default-smithi/7695272
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7697381 2024-05-08 05:26:07 2024-05-08 06:55:33 2024-05-08 07:11:28 0:15:55 0:05:11 0:10:44 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_dom} 2
Failure Reason:

Command failed on smithi112 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:7c8f650b36e258f639fa4a83becade57cbfd2009 pull'

fail 7697326 2024-05-08 05:25:13 2024-05-08 06:23:16 2024-05-08 06:51:35 0:28:19 0:17:04 0:11:15 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi097 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bcb4270a-0d05-11ef-bc97-c7b262605968 -e sha1=7c8f650b36e258f639fa4a83becade57cbfd2009 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"

fail 7696823 2024-05-08 00:45:25 2024-05-08 03:51:43 2024-05-08 06:17:20 2:25:37 2:10:00 0:15:37 smithi main ubuntu 22.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-05-08T04:39:28.063397+0000 mon.a (mon.0) 396 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7696784 2024-05-07 23:05:06 2024-05-08 03:17:53 2024-05-08 03:53:34 0:35:41 0:26:03 0:09:38 smithi main ubuntu 22.04 rbd/singleton/{all/qos conf/{disable-pool-app} objectstore/bluestore-stupid openstack supported-random-distro$/{ubuntu_latest}} 1
pass 7696741 2024-05-07 23:04:31 2024-05-08 02:15:41 2024-05-08 03:17:50 1:02:09 0:51:48 0:10:21 smithi main ubuntu 22.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/replicated extra-conf/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
pass 7696634 2024-05-07 23:03:05 2024-05-08 00:17:30 2024-05-08 02:17:23 1:59:53 1:47:00 0:12:53 smithi main centos 9.stream rbd/encryption/{cache/none clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/replicated features/defaults msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} workloads/qemu_xfstests_luks1_luks1} 3
pass 7696602 2024-05-07 23:02:39 2024-05-07 23:48:06 2024-05-08 00:17:23 0:29:17 0:19:52 0:09:25 smithi main centos 9.stream rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/replicated extra-conf/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zlib supported-random-distro$/{centos_latest} workloads/python_api_tests_with_defaults} 3
pass 7695925 2024-05-07 21:18:42 2024-05-08 07:11:06 2024-05-08 10:11:12 3:00:06 2:47:57 0:12:09 smithi main ubuntu 22.04 rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-stupid 4-supported-random-distro$/{ubuntu_latest} 5-data-pool/ec 6-prepare/qcow2-http 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup conf/{disable-pool-app}} 3
pass 7695848 2024-05-07 21:17:26 2024-05-07 22:07:38 2024-05-07 23:48:11 1:40:33 1:28:14 0:12:19 smithi main centos 9.stream rbd/maintenance/{base/install clusters/{fixed-3 openstack} conf/{disable-pool-app} objectstore/bluestore-hybrid qemu/xfstests supported-random-distro$/{centos_latest} workloads/dynamic_features_no_cache} 3
pass 7695435 2024-05-07 13:33:57 2024-05-07 13:43:28 2024-05-07 15:50:01 2:06:33 1:55:55 0:10:38 smithi main ubuntu 22.04 rgw/verify/{0-install accounts$/{none} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{ubuntu_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
pass 7695393 2024-05-07 05:01:39 2024-05-07 11:29:48 2024-05-07 11:52:35 0:22:47 0:11:23 0:11:24 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/kclient_workunit_suites_pjd}} 3
running 7695272 2024-05-07 03:46:35 2024-05-08 12:21:02 2024-05-08 19:53:52 7:34:34 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/xfstests-dev} 2
fail 7695238 2024-05-07 03:46:07 2024-05-08 12:01:00 2024-05-08 12:20:37 0:19:37 0:08:33 0:11:04 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/kernel_untar_build}} 3
Failure Reason:

Command failed on smithi012 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8e169ddeb2c550feeba2d26322b0465f6a1bd99d pull'

fail 7695094 2024-05-07 01:14:08 2024-05-07 01:30:15 2024-05-07 02:19:21 0:49:06 0:39:37 0:09:29 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7694861 2024-05-06 21:32:49 2024-05-07 10:54:48 2024-05-07 11:19:42 0:24:54 0:13:19 0:11:35 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_fsx thrashosds-health} 4
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi062 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=510b35d4b766757579df5fd9efd2f892309f09e1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

pass 7694842 2024-05-06 21:32:30 2024-05-07 10:29:36 2024-05-07 10:56:08 0:26:32 0:16:42 0:09:50 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-distros/ubuntu_latest tasks/admin_socket_objecter_requests thrashosds-health} 4
fail 7694757 2024-05-06 21:10:40 2024-05-07 09:36:39 2024-05-07 10:20:18 0:43:39 0:32:22 0:11:17 smithi main centos 9.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-05-07T10:10:00.000282+0000 mon.smithi071 (mon.0) 462 : cluster 3 [WRN] PG_DEGRADED: Degraded data redundancy: 61/360 objects degraded (16.944%), 16 pgs degraded" in cluster log

pass 7694728 2024-05-06 21:10:10 2024-05-07 09:12:15 2024-05-07 09:37:16 0:25:01 0:15:22 0:09:39 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw 3-final} 2
fail 7694649 2024-05-06 21:09:11 2024-05-07 08:31:17 2024-05-07 09:07:42 0:36:25 0:25:11 0:11:14 smithi main centos 9.stream orch/cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

"2024-05-07T08:48:25.169171+0000 mon.a (mon.0) 209 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log

pass 7694623 2024-05-06 21:08:49 2024-05-07 08:10:16 2024-05-07 08:31:11 0:20:55 0:11:32 0:09:23 smithi main centos 9.stream orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3