Locked machine:
  Name:          smithi082.front.sepia.ceph.com
  Machine Type:  smithi
  Up:            True
  Locked:        True
  Locked Since:  2021-10-19 08:49:06.762532
  Locked By:     scheduled_yuriw@teuthology
  OS Type:       ubuntu
  OS Version:    18.04
  Arch:          x86_64
  Description:   /home/teuthworker/archive/yuriw-2021-10-18_19:03:43-rados-wip-yuri5-testing-2021-10-18-0906-octopus-distro-basic-smithi/6449454
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6450894 2021-10-19 07:45:46 2021-10-19 08:02:54 2021-10-19 08:09:49 0:06:55 smithi master centos 8.3 rgw/multisite/{clusters frontend/beast ignore-pg-availability omap_limits overrides realms/two-zonegroup supported-random-distro$/{centos_8} tasks/test_multi} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=centos%2F8%2Fx86_64&ref=wip-rgw-error-check-teuth
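The same query can be rerun by hand to confirm whether any ready build exists for that ref (a manual sketch only; it assumes the shaman search endpoint returns a JSON list and that curl and jq are available on the workstation):

    # ask shaman for ready default-flavor ceph builds of this ref on centos/8/x86_64
    curl -s 'https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=centos%2F8%2Fx86_64&ref=wip-rgw-error-check-teuth' | jq .

An empty result would mean no build was ready for ref wip-rgw-error-check-teuth on centos/8/x86_64, so teuthology could not resolve a package version for the job.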

fail 6450875 2021-10-19 07:45:30 2021-10-19 07:56:03 2021-10-19 08:03:03 0:07:00 smithi master centos 8.3 rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=centos%2F8%2Fx86_64&ref=wip-rgw-error-check-teuth
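(Same shaman lookup failure as job 6450894 above; both rgw jobs query ref wip-rgw-error-check-teuth, so the same missing build likely accounts for both. The curl sketch under job 6450894 applies here unchanged.)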

pass 6450531 2021-10-19 04:36:00 2021-10-19 06:57:10 2021-10-19 07:56:03 0:58:53 0:47:18 0:11:35 smithi master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
fail 6450402 2021-10-19 04:33:46 2021-10-19 05:23:47 2021-10-19 06:57:09 1:33:22 1:19:31 0:13:51 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi082 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aa27813273975daa5186efc6d68bebe0a3ec8b20 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'
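For readability, the quoted one-liner is the following sequence (the same commands and environment as in the failure message above, split across lines with comments added):

    # create and enter a scratch dir on the kernel-client mount, then run the workunit
    mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp &&
    cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp &&
    # environment teuthology sets for the workunit
    CEPH_CLI_TEST_DUP_COMMAND=1 \
    CEPH_REF=aa27813273975daa5186efc6d68bebe0a3ec8b20 \
    TESTDIR="/home/ubuntu/cephtest" \
    CEPH_ARGS="--cluster ceph" \
    CEPH_ID="0" \
    PATH=$PATH:/usr/sbin \
    CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 \
    CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 \
    CEPH_MNT=/home/ubuntu/cephtest/mnt.0 \
        adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage \
        timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh

The status 2 is most likely the exit code of kernel_untar_build.sh itself, which untars and builds a kernel source tree on the CephFS mount; the job's teuthology log would have the actual build error.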

pass 6450376 2021-10-19 04:33:18 2021-10-19 05:01:21 2021-10-19 05:25:26 0:24:05 0:16:24 0:07:41 smithi master rhel 8.4 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} 3
pass 6450336 2021-10-19 04:32:38 2021-10-19 04:38:02 2021-10-19 05:01:46 0:23:44 0:17:18 0:06:26 smithi master rhel 8.4 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
pass 6449770 2021-10-19 00:32:12 2021-10-19 01:00:05 2021-10-19 02:18:32 1:18:27 1:08:53 0:09:34 smithi master ubuntu 20.04 upgrade:octopus-x/parallel/{0-distro/ubuntu_20.04 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
fail 6449749 2021-10-18 21:44:40 2021-10-18 21:53:41 2021-10-18 22:17:42 0:24:01 0:16:50 0:07:11 smithi master rhel 8.4 orch:cephadm:osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi019 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b53c9ab78265f6dad241b4b05aa87603f7e66e27 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ee45d38e-305f-11ec-8c28-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
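The command runs the test's OSD removal/zap script inside cephadm shell. With the shell quoting unescaped, the script in the failure message is (comments added; the commands themselves are exactly those quoted above):

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    # resolve the device id backing osd.1, then the host and device path that hold it
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    # remove osd.1, wait for the removal to finish, zap its device,
    # then wait for the OSD to be recreated and report up again
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    ceph orch device zap $HOST $DEV --force
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done

With set -e, the reported status 22 is the exit code of whichever command failed first; ceph CLI calls generally return errno-style codes, so 22 suggests an EINVAL from one of the orch steps.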

pass 6449615 2021-10-18 19:10:42 2021-10-19 02:18:09 2021-10-19 03:19:36 1:01:27 0:51:46 0:09:41 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
pass 6449479 2021-10-18 19:08:52 2021-10-19 00:22:03 2021-10-19 01:00:04 0:38:01 0:28:13 0:09:48 smithi master centos 8.2 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
running 6449454 2021-10-18 19:08:39 2021-10-19 08:49:06 2021-10-19 09:03:12 0:15:48 smithi master ubuntu 18.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_18.04} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 6449427 2021-10-18 19:08:26 2021-10-18 23:49:54 2021-10-19 00:21:55 0:32:01 0:25:25 0:06:36 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
pass 6449377 2021-10-18 19:08:02 2021-10-18 23:17:52 2021-10-18 23:49:53 0:32:01 0:22:36 0:09:25 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} 2
pass 6449345 2021-10-18 19:07:47 2021-10-18 22:55:55 2021-10-18 23:17:55 0:22:00 0:10:49 0:11:11 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/redirect_set_object} 2
pass 6449328 2021-10-18 19:07:38 2021-10-19 08:08:40 2021-10-19 08:49:01 0:40:21 0:28:49 0:11:32 smithi master ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/rados_mon_workunits} 2
pass 6449294 2021-10-18 19:07:21 2021-10-18 22:15:48 2021-10-18 22:55:48 0:40:00 0:30:47 0:09:13 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} 2
pass 6449244 2021-10-18 19:06:57 2021-10-18 21:13:42 2021-10-18 21:53:44 0:40:02 0:34:35 0:05:27 smithi master rhel 8.4 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects} 2
pass 6449205 2021-10-18 19:06:38 2021-10-18 20:43:39 2021-10-18 21:13:41 0:30:02 0:23:58 0:06:04 smithi master rhel 8.4 rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 6449185 2021-10-18 19:06:29 2021-10-18 19:59:34 2021-10-18 20:43:37 0:44:03 0:34:07 0:09:56 smithi master centos 8.3 rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
fail 6449144 2021-10-18 19:06:08 2021-10-18 19:16:54 2021-10-18 19:38:55 0:22:01 0:15:46 0:06:15 smithi master rhel 8.4 orch:cephadm:osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi103 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:ba88d0a8c1dd128d864cff753c73d2d788e27a57 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 11b7db02-304a-11ec-8c28-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
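(Same rm-zap-wait sequence as job 6449749 above, run here on smithi103 against image ba88d0a8c1dd128d864cff753c73d2d788e27a57; the unescaped sketch under that job applies unchanged, and both runs fail with status 22.)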