Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
gibba009.front.sepia.ceph.com gibba True True 2021-12-02 19:33:15.980117 sage@teuthology centos 8 x86_64 large-scale test cluster
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 6538951 2021-12-02 06:00:46 2021-12-02 19:25:08 2021-12-02 19:42:06 0:16:58 gibba master centos 8.3 rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-low-osd-mem-target validator/memcheck workloads/python_api_tests_with_journaling} 1
pass 6538823 2021-12-02 05:01:38 2021-12-02 06:18:21 2021-12-02 06:49:59 0:31:38 0:22:24 0:09:14 gibba master centos 8.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8.stream} tasks/{0-install test/rbd_python_api_tests}} 3
pass 6538813 2021-12-02 05:01:30 2021-12-02 05:39:20 2021-12-02 06:18:29 0:39:09 0:28:25 0:10:44 gibba master ubuntu 20.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/rados_api_tests}} 3
pass 6538807 2021-12-02 05:01:25 2021-12-02 05:10:21 2021-12-02 05:40:52 0:30:31 0:17:33 0:12:58 gibba master centos 8.3 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/{0-install test/kclient_workunit_direct_io}} 3
dead 6538214 2021-12-01 10:15:41 2021-12-01 16:29:53 2021-12-01 18:29:03 1:59:10 gibba master ubuntu 20.04 rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-pool-snaps-readproxy} 2
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

dead 6534942 2021-11-30 08:43:46 2021-12-01 14:32:16 2021-12-01 16:30:49 1:58:33 gibba master ubuntu 20.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/replicated tasks/rgw_ragweed ubuntu_latest} 2
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

dead 6533995 2021-11-30 03:16:32 2021-12-02 17:46:49 2021-12-02 19:24:10 1:37:21 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
pass 6533980 2021-11-30 03:16:21 2021-12-02 17:19:10 2021-12-02 17:48:08 0:28:58 0:19:01 0:09:57 gibba master centos 8.stream fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8.stream} mount/fuse objectstore-ec/bluestore-bitmap overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 6533961 2021-11-30 03:16:08 2021-12-02 16:46:39 2021-12-02 17:12:36 0:25:57 0:11:22 0:14:35 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
dead 6533013 2021-11-29 05:06:41 2021-12-02 16:32:31 2021-12-02 16:48:48 0:16:17 gibba master ubuntu rgw/multifs/{clusters/fixed-2 frontend/civetweb objectstore/bluestore-bitmap overrides rgw_pool_type/replicated tasks/rgw_multipart_upload} 2
Failure Reason:

SSH connection to gibba036 was lost: 'uname -r'

fail 6532999 2021-11-29 05:06:31 2021-12-02 16:13:24 2021-12-02 16:33:00 0:19:36 0:06:33 0:13:03 gibba master rgw/multifs/{clusters/fixed-2 frontend/civetweb objectstore/bluestore-bitmap overrides rgw_pool_type/ec tasks/rgw_ragweed} 2
Failure Reason:

Command failed on gibba009 with status 2: 'cd /home/ubuntu/cephtest/ragweed && ./bootstrap'

pass 6532990 2021-11-29 05:06:25 2021-12-02 15:55:49 2021-12-02 16:14:56 0:19:07 0:06:39 0:12:28 gibba master ubuntu rgw/multifs/{clusters/fixed-2 frontend/civetweb objectstore/filestore-xfs overrides rgw_pool_type/ec tasks/rgw_user_quota} 2
fail 6532967 2021-11-29 05:06:10 2021-12-02 15:27:07 2021-12-02 15:57:02 0:29:55 0:19:14 0:10:41 gibba master centos 8.0 rgw/verify/{centos_latest clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile sharding$/{single} striping$/{stripe-equals-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/valgrind} 2
Failure Reason:

Command failed on gibba009 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'

pass 6532954 2021-11-29 05:06:01 2021-12-02 15:08:41 2021-12-02 15:28:02 0:19:21 0:06:53 0:12:28 gibba master ubuntu rgw/multifs/{clusters/fixed-2 frontend/civetweb objectstore/filestore-xfs overrides rgw_pool_type/ec-profile tasks/rgw_user_quota} 2
pass 6532911 2021-11-29 02:05:20 2021-12-02 14:09:56 2021-12-02 15:10:30 1:00:34 0:50:57 0:09:37 gibba master centos 8.3 rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-comp-snappy validator/memcheck workloads/rbd_mirror} 1
pass 6532890 2021-11-29 02:05:03 2021-12-02 13:34:34 2021-12-02 14:09:50 0:35:16 0:19:31 0:15:45 gibba master centos 8.3 rbd/qemu/{cache/writethrough clusters/{fixed-3 openstack} features/readbalance msgr-failures/few objectstore/bluestore-comp-zstd pool/ec-data-pool supported-random-distro$/{centos_8} workloads/qemu_bonnie} 3
pass 6532879 2021-11-29 02:04:54 2021-12-02 13:12:37 2021-12-02 13:41:13 0:28:36 0:20:53 0:07:43 gibba master rhel 8.4 rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-bitmap qemu/xfstests supported-random-distro$/{rhel_8} workloads/dynamic_features} 3
fail 6532860 2021-11-29 02:04:39 2021-12-02 12:35:26 2021-12-02 13:14:24 0:38:58 0:26:47 0:12:11 gibba master ubuntu 20.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-lz4 pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
Failure Reason:

"2021-12-02T13:02:30.559392+0000 mon.a (mon.0) 713 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 6532846 2021-11-29 02:04:27 2021-12-02 12:11:59 2021-12-02 12:36:32 0:24:33 0:12:17 0:12:16 gibba master centos 8.stream rbd/encryption/{cache/writeback clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-comp-zlib pool/small-cache-pool supported-random-distro$/{centos_8.stream} workloads/qemu_xfstests_luks2} 3
Failure Reason:

Command failed on gibba018 with status 1: 'test -f /home/ubuntu/cephtest/archive/qemu/client.0/success'

pass 6532830 2021-11-29 02:04:14 2021-12-02 11:47:10 2021-12-02 12:15:54 0:28:44 0:20:13 0:08:31 gibba master rhel 8.4 rbd/encryption/{cache/writearound clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-comp-snappy pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/qemu_xfstests_luks1} 3