Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi111.front.sepia.ceph.com | smithi | True | True | 2023-09-06 14:41:09.214512 | scheduled_pdonnell@teuthology | centos | 8 | x86_64 | /home/teuthworker/archive/pdonnell-2023-09-06_13:54:24-fs-wip-batrick-testing-20230905.192950-distro-default-smithi/7390073 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7390073 | 2023-09-06 13:55:26 | 2023-09-06 14:40:29 | 2023-09-06 15:02:45 | 0:22:16 | 0:11:35 | 0:10:41 | smithi | main | centos | 8.stream | fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_8 clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} | 3 |
Failure Reason: | Command failed on smithi060 with status 22: "sudo ceph --cluster ceph config set client 'client mount timeout' 600" |
pass | 7390035 | 2023-09-06 13:54:58 | 2023-09-06 14:05:48 | 2023-09-06 14:41:03 | 0:35:15 | 0:24:16 | 0:10:59 | smithi | main | ubuntu | 22.04 | fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_blogbench traceless/50pc} | 2 |
fail | 7389963 | 2023-09-06 09:28:03 | 2023-09-06 11:10:58 | 2023-09-06 11:46:35 | 0:35:37 | 0:25:31 | 0:10:06 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} | 1 |
Failure Reason: | Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) |
pass | 7389934 | 2023-09-06 09:27:41 | 2023-09-06 10:41:18 | 2023-09-06 11:11:08 | 0:29:50 | 0:16:48 | 0:13:02 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 |
fail | 7389900 | 2023-09-06 09:27:15 | 2023-09-06 10:14:02 | 2023-09-06 10:43:39 | 0:29:37 | 0:18:59 | 0:10:38 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 |
Failure Reason: | Command failed on smithi002 with status 32: "sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'mount -t nfs 10.0.31.2:/fake /mnt/foo -o port=2999'" |
pass | 7389868 | 2023-09-06 09:26:49 | 2023-09-06 09:28:15 | 2023-09-06 10:14:07 | 0:45:52 | 0:32:20 | 0:13:32 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
pass | 7389822 | 2023-09-06 08:14:04 | 2023-09-06 08:25:13 | 2023-09-06 09:30:27 | 1:05:14 | 0:54:42 | 0:10:32 | smithi | main | centos | 9.stream | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-snappy 4-supported-random-distro$/{centos_latest} 5-pool/replicated-data-pool 6-prepare/qcow2-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup conf/{mgr}} | 3 |
pass | 7389757 | 2023-09-06 06:31:27 | 2023-09-06 12:03:29 | 2023-09-06 14:06:48 | 2:03:19 | 1:55:08 | 0:08:11 | smithi | main | rhel | 8.6 | upgrade:quincy-x/stress-split/{0-distro/rhel_8.6_container_tools_rhel8 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} | 2 |
fail | 7389726 | 2023-09-06 04:30:36 | 2023-09-06 05:22:04 | 2023-09-06 05:59:27 | 0:37:23 | 0:24:06 | 0:13:17 | smithi | main | centos | 9.stream | rados:thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} | 4 |
Failure Reason: | "2023-09-06T05:52:16.112865+0000 osd.9 (osd.9) 9 : cluster [ERR] Error -2 reading object 3:09b2e764:::smithi14436213-156:88" in cluster log |
pass | 7389706 | 2023-09-06 04:30:20 | 2023-09-06 04:51:12 | 2023-09-06 05:25:53 | 0:34:41 | 0:19:45 | 0:14:56 | smithi | main | ubuntu | 20.04 | rados:thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/none thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=2-crush} | 4 |
pass | 7389648 | 2023-09-05 23:51:52 | 2023-09-06 03:04:17 | 2023-09-06 03:39:48 | 0:35:31 | 0:21:51 | 0:13:40 | smithi | main | ubuntu | 20.04 | rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} conf/{mgr} extra-conf/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-snappy pool/small-cache-pool supported-random-distro$/{ubuntu_20.04} workloads/python_api_tests_with_journaling} | 3 |
pass | 7389620 | 2023-09-05 23:51:33 | 2023-09-06 02:34:33 | 2023-09-06 03:07:13 | 0:32:40 | 0:21:46 | 0:10:54 | smithi | main | ubuntu | 20.04 | rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} conf/{mgr} extra-conf/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zlib pool/small-cache-pool supported-random-distro$/{ubuntu_20.04} workloads/python_api_tests_with_journaling} | 3 |
pass | 7389540 | 2023-09-05 23:50:35 | 2023-09-06 01:02:50 | 2023-09-06 02:34:35 | 1:31:45 | 1:19:11 | 0:12:34 | smithi | main | ubuntu | 20.04 | rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{ubuntu_20.04} 4-cache-path 5-cache-mode/ssd 6-cache-size/5G 7-workloads/qemu_xfstests conf/{mgr}} | 2 |
fail | 7389493 | 2023-09-05 23:50:02 | 2023-09-06 00:31:08 | 2023-09-06 01:04:09 | 0:33:01 | 0:21:56 | 0:11:05 | smithi | main | ubuntu | 20.04 | rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} conf/{mgr} extra-conf/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-stupid pool/ec-data-pool supported-random-distro$/{ubuntu_20.04} workloads/python_api_tests_with_journaling} | 3 |
Failure Reason: | Command failed on smithi111 with status 1: 'sudo journalctl -b0 \| gzip -9 > /home/ubuntu/cephtest/archive/syslog/journalctl-b0.gz' |
pass | 7389435 | 2023-09-05 23:49:22 | 2023-09-05 23:51:18 | 2023-09-06 00:31:48 | 0:40:30 | 0:26:35 | 0:13:55 | smithi | main | centos | 9.stream | rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} conf/{mgr} extra-conf/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy pool/none supported-random-distro$/{centos_latest} workloads/rbd_fio} | 3 |
fail | 7389419 | 2023-09-05 23:35:54 | 2023-09-06 11:46:18 | 2023-09-06 12:05:13 | 0:18:55 | 0:07:57 | 0:10:58 | smithi | main | ubuntu | 22.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-readahead} | 2 |
Failure Reason: | Command failed on smithi060 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=18.0.0-5908-g69d81b88-1jammy ceph-mds=18.0.0-5908-g69d81b88-1jammy ceph-common=18.0.0-5908-g69d81b88-1jammy ceph-fuse=18.0.0-5908-g69d81b88-1jammy ceph-test=18.0.0-5908-g69d81b88-1jammy radosgw=18.0.0-5908-g69d81b88-1jammy python-ceph=18.0.0-5908-g69d81b88-1jammy libcephfs1=18.0.0-5908-g69d81b88-1jammy libcephfs-java=18.0.0-5908-g69d81b88-1jammy libcephfs-jni=18.0.0-5908-g69d81b88-1jammy librados2=18.0.0-5908-g69d81b88-1jammy librbd1=18.0.0-5908-g69d81b88-1jammy rbd-fuse=18.0.0-5908-g69d81b88-1jammy python3-cephfs=18.0.0-5908-g69d81b88-1jammy cephfs-shell=18.0.0-5908-g69d81b88-1jammy cephfs-top=18.0.0-5908-g69d81b88-1jammy cephfs-mirror=18.0.0-5908-g69d81b88-1jammy' |
fail | 7389401 | 2023-09-05 23:35:40 | 2023-09-06 08:03:46 | 2023-09-06 08:25:08 | 0:21:22 | 0:10:29 | 0:10:53 | smithi | main | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/auto-repair} | 2 |
Failure Reason: | "2023-09-06T08:20:01.113118+0000 mon.a (mon.0) 162 : cluster [WRN] Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log |
fail | 7389335 | 2023-09-05 23:34:49 | 2023-09-06 06:58:10 | 2023-09-06 08:03:48 | 1:05:38 | 0:55:23 | 0:10:15 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/dbench}} | 3 |
Failure Reason: | Command failed (workunit test suites/dbench.sh) on smithi088 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=69d81b88787bd100f82fd1563172636dd8e08711 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh' |
fail | 7389312 | 2023-09-05 23:34:31 | 2023-09-06 06:37:02 | 2023-09-06 06:58:10 | 0:21:08 | 0:08:23 | 0:12:45 | smithi | main | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/pool-perm} | 2 |
Failure Reason: | Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
pass | 7389267 | 2023-09-05 23:33:56 | 2023-09-06 05:58:08 | 2023-09-06 06:38:49 | 0:40:41 | 0:30:30 | 0:10:11 | smithi | main | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |