Name:          gibba001.front.sepia.ceph.com
Machine Type:  gibba
Up:            True
Locked:        True
Locked Since:  2021-12-02 19:45:30.800797
Locked By:     sage@teuthology
OS Type:       centos
OS Version:    8
Arch:          x86_64
Description:   None
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 6538971 2021-12-02 06:01:02 2021-12-02 19:25:09 2021-12-02 19:40:14 0:15:05 gibba master ubuntu 18.04 rbd/qemu/{cache/writethrough clusters/{fixed-3 openstack} features/readbalance msgr-failures/few objectstore/bluestore-comp-zstd pool/small-cache-pool supported-random-distro$/{ubuntu_18.04} workloads/qemu_fsstress} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 6538229 2021-12-01 10:15:54 2021-12-01 17:06:57 2021-12-01 17:37:11 0:30:14 0:19:04 0:11:10 gibba master ubuntu 20.04 rados:thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/small-objects-balanced} 2
pass 6538209 2021-12-01 10:15:37 2021-12-01 16:22:49 2021-12-01 17:07:16 0:44:27 0:37:08 0:07:19 gibba master rhel 8.4 rados:thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/snaps-few-objects} 2
fail 6534979 2021-11-30 08:44:14 2021-12-01 15:55:16 2021-12-01 16:24:40 0:29:24 0:18:11 0:11:13 gibba master centos 8.3 rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6534938 2021-11-30 08:43:43 2021-12-01 14:32:14 2021-12-01 15:57:30 1:25:16 1:14:47 0:10:29 gibba master centos 8.0 rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec sharding$/{single} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} 2
Failure Reason:

saw valgrind issues

dead 6533995 2021-11-30 03:16:32 2021-12-02 17:46:49 2021-12-02 19:24:10 1:37:21 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
pass 6533984 2021-11-30 03:16:24 2021-12-02 17:24:52 2021-12-02 17:43:26 0:18:34 0:09:24 0:09:10 gibba master centos 8.stream fs/top/{begin cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/whitelist_health supported-random-distros$/{centos_8.stream} tasks/fstop} 1
pass 6533966 2021-11-30 03:16:11 2021-12-02 16:54:32 2021-12-02 17:24:47 0:30:15 0:17:08 0:13:07 gibba master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
pass 6533022 2021-11-29 05:06:47 2021-12-02 16:41:55 2021-12-02 16:57:51 0:15:56 0:06:23 0:09:33 gibba master ubuntu rgw/multifs/{clusters/fixed-2 frontend/civetweb objectstore/filestore-xfs overrides rgw_pool_type/replicated tasks/rgw_user_quota} 2
fail 6532997 2021-11-29 05:06:30 2021-12-02 16:08:33 2021-12-02 16:41:58 0:33:25 0:20:20 0:13:05 gibba master centos 8.3 rgw/verify/{centos_latest clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Command failed on gibba001 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'

fail 6532978 2021-11-29 05:06:17 2021-12-02 15:41:23 2021-12-02 16:10:44 0:29:21 0:19:28 0:09:53 gibba master centos 8.3 rgw/verify/{centos_latest clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile sharding$/{single} striping$/{stripe-equals-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Command failed on gibba001 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'

fail 6532958 2021-11-29 05:06:04 2021-12-02 15:14:13 2021-12-02 15:41:54 0:27:41 0:16:26 0:11:15 gibba master centos 8.3 rgw/verify/{centos_latest clusters/fixed-2 frontend/beast msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/ec sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Command failed on gibba001 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'

pass 6532943 2021-11-29 05:05:53 2021-12-02 14:54:42 2021-12-02 15:15:04 0:20:22 0:09:09 0:11:13 gibba master ubuntu rgw/multifs/{clusters/fixed-2 frontend/civetweb objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile tasks/rgw_multipart_upload} 2
fail 6532925 2021-11-29 05:05:41 2021-12-02 14:27:43 2021-12-02 14:55:20 0:27:37 0:16:29 0:11:08 gibba master centos 8.3 rgw/verify/{centos_latest clusters/fixed-2 frontend/beast msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile sharding$/{single} striping$/{stripe-equals-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Command failed on gibba001 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'

fail 6532897 2021-11-29 02:05:08 2021-12-02 13:45:48 2021-12-02 14:30:43 0:44:55 0:34:26 0:10:29 gibba master centos 8.stream rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zlib pool/none supported-random-distro$/{centos_8.stream} workloads/c_api_tests_with_journaling} 3
Failure Reason:

"2021-12-02T14:16:14.227622+0000 mon.a (mon.0) 938 : cluster [WRN] Health check failed: Degraded data redundancy: 2/1384 objects degraded (0.145%), 2 pgs degraded (PG_DEGRADED)" in cluster log

pass 6532882 2021-11-29 02:04:56 2021-12-02 13:15:39 2021-12-02 13:45:44 0:30:05 0:16:10 0:13:55 gibba master centos 8.stream rbd/singleton-bluestore/{all/issue-20295 objectstore/bluestore-bitmap openstack supported-random-distro$/{centos_8.stream}} 4
pass 6532867 2021-11-29 02:04:44 2021-12-02 12:49:31 2021-12-02 13:20:24 0:30:53 0:22:43 0:08:10 gibba master centos 8.3 rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_8} workloads/rbd-mirror-workunit-config-key} 2
pass 6532849 2021-11-29 02:04:30 2021-12-02 12:18:00 2021-12-02 12:48:27 0:30:27 0:20:38 0:09:49 gibba master rhel 8.4 rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-hybrid 4-supported-random-distro$/{rhel_8} 5-pool/ec-data-pool 6-prepare/raw-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} 3
pass 6532836 2021-11-29 02:04:19 2021-12-02 11:54:13 2021-12-02 12:20:55 0:26:42 0:15:31 0:11:11 gibba master centos 8.3 rbd/qemu/{cache/none clusters/{fixed-3 openstack} features/readbalance msgr-failures/few objectstore/bluestore-comp-lz4 pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/qemu_bonnie} 3
fail 6532819 2021-11-29 02:04:05 2021-12-02 11:54:54 948 gibba master centos 8.stream rbd/immutable-object-cache/{clusters/{fix-2 openstack} pool/ceph_and_immutable_object_cache supported-random-distro$/{centos_8.stream} workloads/qemu_on_immutable_object_cache_and_thrash} 2
Failure Reason:

Command failed on gibba031 with status 1: 'test -n "$(ls /tmp/ceph-immutable-object-cache )"'