Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
gibba017.front.sepia.ceph.com gibba True True 2021-12-02 19:33:15.987769 sage@teuthology centos 8 x86_64 None
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6538814 2021-12-02 05:01:31 2021-12-02 05:41:02 2021-12-02 07:52:04 2:11:02 2:00:18 0:10:44 gibba master centos 8.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8.stream} tasks/{0-install test/rados_bench}} 3
pass 6538803 2021-12-02 05:01:22 2021-12-02 05:04:19 2021-12-02 05:42:56 0:38:37 0:28:21 0:10:16 gibba master rhel 8.4 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{rhel_8} tasks/{0-install test/cfuse_workunit_suites_blogbench}} 3
dead 6538268 2021-12-01 11:16:47 2021-12-02 19:25:07 2021-12-02 19:40:12 0:15:05 gibba master ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_truncate_delay thrashosds-health whitelist_health} 4
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 6538221 2021-12-01 10:15:47 2021-12-01 16:51:10 2021-12-01 17:37:56 0:46:46 0:34:09 0:12:37 gibba master ubuntu 20.04 rados:thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/pool-snaps-few-objects} 2
pass 6538210 2021-12-01 10:15:38 2021-12-01 16:24:50 2021-12-01 16:54:07 0:29:17 0:20:17 0:09:00 gibba master rhel 8.4 rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/write_fadvise_dontneed} 2
pass 6538201 2021-12-01 10:15:30 2021-12-01 16:01:21 2021-12-01 16:27:09 0:25:48 0:16:06 0:09:42 gibba master ubuntu 20.04 rados:thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/redirect_promote_tests} 2
fail 6534941 2021-11-30 08:43:46 2021-12-01 14:32:16 2021-12-01 16:01:11 1:28:55 1:19:02 0:09:53 gibba master centos 8.0 rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec-profile sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} 2
Failure Reason:

saw valgrind issues

dead 6534020 2021-11-30 03:16:49 2021-12-02 18:58:59 2021-12-02 19:24:43 0:25:44 gibba master rhel 8.4 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
pass 6534011 2021-11-30 03:16:43 2021-12-02 18:42:43 2021-12-02 19:05:47 0:23:04 0:12:16 0:10:48 gibba master centos 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/openfiletable} 2
pass 6534002 2021-11-30 03:16:37 2021-12-02 18:17:36 2021-12-02 18:42:50 0:25:14 0:17:38 0:07:36 gibba master rhel 8.4 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
pass 6533983 2021-11-30 03:16:23 2021-12-02 17:22:01 2021-12-02 18:19:48 0:57:47 0:46:31 0:11:16 gibba master centos 8.stream fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8.stream} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_snaptests}} 2
pass 6533966 2021-11-30 03:16:11 2021-12-02 16:54:32 2021-12-02 17:24:47 0:30:15 0:17:08 0:13:07 gibba master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
pass 6533014 2021-11-29 05:06:41 2021-12-02 16:33:02 2021-12-02 16:53:34 0:20:32 0:10:18 0:10:14 gibba master ubuntu rgw/thrash/{civetweb clusters/fixed-2 install objectstore/bluestore-bitmap thrasher/default thrashosds-health workload/rgw_user_quota} 2
fail 6532995 2021-11-29 05:06:29 2021-12-02 16:05:12 2021-12-02 16:33:48 0:28:36 0:17:23 0:11:13 gibba master centos 8.0 rgw/verify/{centos_latest clusters/fixed-2 frontend/beast msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/ec-profile sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/valgrind} 2
Failure Reason:

Command failed on gibba017 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'

fail 6532984 2021-11-29 05:06:21 2021-12-02 15:48:16 2021-12-02 16:07:16 0:19:00 0:06:07 0:12:53 gibba master rgw/multifs/{clusters/fixed-2 frontend/civetweb objectstore/filestore-xfs overrides rgw_pool_type/replicated tasks/rgw_ragweed} 2
Failure Reason:

Command failed on gibba017 with status 2: 'cd /home/ubuntu/cephtest/ragweed && ./bootstrap'

pass 6532971 2021-11-29 05:06:12 2021-12-02 15:31:19 2021-12-02 15:51:14 0:19:55 0:09:11 0:10:44 gibba master rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_kv 3-rgw/rgw 4-tests/{s3tests}} 1
pass 6532947 2021-11-29 05:05:56 2021-12-02 14:57:43 2021-12-02 15:20:02 0:22:19 0:09:21 0:12:58 gibba master rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/testing 3-rgw/rgw 4-tests/{s3tests}} 1
fail 6532932 2021-11-29 05:05:46 2021-12-02 14:34:36 2021-12-02 15:00:05 0:25:29 0:16:34 0:08:55 gibba master centos 8.3 rgw/verify/{centos_latest clusters/fixed-2 frontend/beast msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Command failed on gibba017 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'

fail 6532908 2021-11-29 02:05:17 2021-12-02 14:04:44 2021-12-02 14:34:40 0:29:56 0:16:10 0:13:46 gibba master ubuntu 20.04 rbd/qemu/{cache/none clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-hybrid pool/none supported-random-distro$/{ubuntu_latest} workloads/qemu_fsstress} 3
Failure Reason:

"2021-12-02T14:23:29.307030+0000 mon.a (mon.0) 114 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 6532896 2021-11-29 02:05:08 2021-12-02 13:45:18 2021-12-02 14:08:41 0:23:23 0:14:25 0:08:58 gibba master centos 8.stream rbd/thrash/{base/install clusters/{fixed-2 openstack} msgr-failures/few objectstore/filestore-xfs supported-random-distro$/{centos_8.stream} thrashers/cache thrashosds-health workloads/rbd_fsx_nocache} 2