Name:          smithi055.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2022-11-29 20:56:16.705570
Locked By:     scheduled_yuriw@teuthology
OS Type:       rhel
OS Version:    8.4
Arch:          x86_64
Description:   /home/teuthworker/archive/yuriw-2022-11-29_15:35:32-rados-wip-yuri3-testing-2022-11-28-0750-pacific-distro-default-smithi/7097126
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7097126 2022-11-29 15:44:42 2022-11-29 20:56:06 2022-11-30 00:02:12 3:07:46 smithi main rhel 8.4 rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd} 1
pass 7097098 2022-11-29 15:44:07 2022-11-29 20:37:02 2022-11-29 20:56:14 0:19:12 0:09:52 0:09:20 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
pass 7097070 2022-11-29 15:43:33 2022-11-29 20:15:18 2022-11-29 20:37:09 0:21:51 0:13:14 0:08:37 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
pass 7097017 2022-11-29 15:42:31 2022-11-29 19:33:23 2022-11-29 20:14:00 0:40:37 0:35:19 0:05:18 smithi main centos 8.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
pass 7096935 2022-11-29 15:40:56 2022-11-29 18:07:07 2022-11-29 19:33:13 1:26:06 1:16:52 0:09:14 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache-agent-big} 2
pass 7096894 2022-11-29 15:40:06 2022-11-29 17:32:06 2022-11-29 18:09:51 0:37:45 0:31:27 0:06:18 smithi main rhel 8.4 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7096876 2022-11-29 15:39:44 2022-11-29 17:13:52 2022-11-29 17:33:04 0:19:12 0:13:38 0:05:34 smithi main rhel 8.4 rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 2
pass 7096841 2022-11-29 15:39:03 2022-11-29 16:48:04 2022-11-29 17:14:15 0:26:11 0:17:48 0:08:23 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
pass 7096815 2022-11-29 15:38:32 2022-11-29 16:23:07 2022-11-29 16:49:33 0:26:26 0:15:05 0:11:21 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7096606 2022-11-29 13:15:01 2022-11-29 13:28:03 2022-11-29 16:27:00 2:58:57 2:52:39 0:06:18 smithi main centos 8.stream rgw/tools/{centos_latest cluster tasks} 1
pass 7096476 2022-11-29 09:15:56 2022-11-29 13:02:42 2022-11-29 13:27:57 0:25:15 0:16:26 0:08:49 smithi main rhel 8.6 rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-comp-zstd supported-random-distro$/{rhel_8} thrashers/cache thrashosds-health workloads/rbd_nbd} 3
fail 7096433 2022-11-29 09:15:03 2022-11-29 12:02:40 2022-11-29 13:04:58 1:02:18 0:48:10 0:14:08 smithi main rhel 8.6 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-hybrid pool/none supported-random-distro$/{rhel_8} workloads/c_api_tests} 3
Failure Reason:

"1669725059.3204458 mon.a (mon.0) 251 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
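This job failed because the quoted warning appeared verbatim in the archived cluster log and was not whitelisted. A minimal sketch of the kind of check involved is below; it builds its own one-line sample log (the line is copied from the failure reason above), since the real log lives under the job's archive directory, whose path varies per run:

```shell
# Sketch: count health-check warnings in a cluster log, the pattern that
# causes teuthology to mark a job failed. LOG here is a generated sample
# file; in practice point it at the job's archived cluster log.
LOG=sample-cluster.log
printf '%s\n' \
  '1669725059.3204458 mon.a (mon.0) 251 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)' \
  > "$LOG"

# A non-zero count means an unwhitelisted health warning was logged.
grep -cE 'cluster \[WRN\] Health check failed' "$LOG"
```

Transient PG_AVAILABILITY warnings like this one (1 pg inactive/peering during the run) are a common cause of otherwise-healthy rbd job failures.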

pass 7096420 2022-11-29 09:14:47 2022-11-29 11:38:45 2022-11-29 12:09:42 0:30:57 0:20:27 0:10:30 smithi main ubuntu 20.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-lz4 pool/none supported-random-distro$/{ubuntu_latest} workloads/python_api_tests} 3
pass 7096382 2022-11-29 09:14:01 2022-11-29 10:51:04 2022-11-29 11:40:06 0:49:02 0:26:53 0:22:09 smithi main rhel 8.6 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-lz4 pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/fsx} 3
pass 7096348 2022-11-29 09:13:19 2022-11-29 10:04:24 2022-11-29 11:07:05 1:02:41 0:47:41 0:15:00 smithi main centos 8.stream rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-bitmap pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/c_api_tests_with_defaults} 3
pass 7096325 2022-11-29 09:12:50 2022-11-29 09:30:36 2022-11-29 10:11:55 0:41:19 0:26:48 0:14:31 smithi main ubuntu 20.04 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zlib pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/fsx} 3
fail 7096255 2022-11-29 04:39:12 2022-11-29 04:40:10 2022-11-29 05:13:31 0:33:21 0:18:13 0:15:08 smithi main centos 8.stream fs:fscrypt/{begin/{0-install 1-ceph 2-logrotate} bluestore-bitmap centos_latest clusters/1-mds-1-client conf/{client mds mon osd} mount/kclient/{mount-syntax/v1 mount overrides/{distro/testing/k-testing}} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/fscrypt-ffsb} 3
Failure Reason:

Command failed on smithi171 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/lxbsz/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 962785a1bf7f08cb714f3da6be75a5a78bb798f2'

fail 7096241 2022-11-29 00:53:47 2022-11-29 00:55:24 2022-11-29 01:39:07 0:43:43 0:25:36 0:18:07 smithi main centos 8.stream fs:fscrypt/{begin/{0-install 1-ceph 2-logrotate} bluestore-bitmap centos_latest clusters/1-mds-4-client conf/{client mds mon osd} mount/kclient/{mount-syntax/v1 mount overrides/{distro/testing/k-testing}} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/2-fscrypt-iozone} 6
Failure Reason:

Command failed (workunit test fs/fscrypt.sh) on smithi055 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a46e96ee5cf5313bcc999ea0e4ad8103bf4955f6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/fscrypt.sh unlocked iozone'

fail 7095764 2022-11-28 21:26:48 2022-11-29 08:58:58 2022-11-29 09:34:36 0:35:38 0:12:36 0:23:02 smithi main ubuntu 20.04 rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-hybrid 4-supported-random-distro$/{ubuntu_latest} 5-pool/ec-data-pool 6-prepare/qcow2-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} 3
Failure Reason:

Command failed on smithi119 with status 1: 'test -f /home/ubuntu/cephtest/archive/qemu/client.0/success'

pass 7095620 2022-11-28 21:25:10 2022-11-29 07:59:58 2022-11-29 09:11:03 1:11:05 1:02:26 0:08:39 smithi main rhel 8.4 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-zstd policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{rhel_8} workloads/rbd-mirror-journal-stress-workunit} 2