Name:         gibba029.front.sepia.ceph.com
Machine Type: gibba
Up:           True
Locked:       True
Locked Since: 2021-12-02 19:45:30.795246
Locked By:    sage@teuthology
OS Type:      centos
OS Version:   8
Arch:         x86_64
Description:  large-scale test cluster
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 6538953 2021-12-02 06:00:48 2021-12-02 19:25:11 2021-12-02 19:40:22 0:15:11 gibba master ubuntu 18.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/small-cache-pool supported-random-distro$/{ubuntu_18.04} workloads/c_api_tests} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 6538820 2021-12-02 05:01:36 2021-12-02 05:59:17 2021-12-02 06:39:23 0:40:06 0:29:36 0:10:30 gibba master centos 8.3 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/{0-install test/rbd_api_tests}} 3
pass 6538222 2021-12-01 10:15:48 2021-12-01 16:54:11 2021-12-01 17:29:09 0:34:58 0:26:21 0:08:37 gibba master centos 8.3 rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/rados_api_tests} 2
pass 6538213 2021-12-01 10:15:40 2021-12-01 16:29:32 2021-12-01 16:54:26 0:24:54 0:16:40 0:08:14 gibba master centos 8.3 rados:thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/cache-agent-small} 2
pass 6538203 2021-12-01 10:15:32 2021-12-01 16:06:33 2021-12-01 16:29:50 0:23:17 0:13:28 0:09:49 gibba master centos 8.3 rados:thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/set-chunks-read} 2
dead 6534977 2021-11-30 08:44:13 2021-12-01 15:52:05 2021-12-01 16:07:18 0:15:13 0:03:30 0:11:43 gibba master ubuntu 20.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec tasks/rgw_s3tests ubuntu_latest} 2
Failure Reason:

Failure object was:
  gibba016.front.sepia.ceph.com:
    msg: 'Failed to update apt cache: '
    invocation:
      module_args:
        dpkg_options: force-confdef,force-confold
        autoremove: False
        force: False
        force_apt_get: False
        policy_rc_d: None
        package: None
        autoclean: False
        install_recommends: None
        purge: False
        allow_unauthenticated: False
        state: present
        upgrade: None
        update_cache: True
        default_release: None
        only_upgrade: False
        deb: None
        cache_valid_time: 0
    _ansible_no_log: False
    attempts: 24
    changed: False

Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping
    node_key = self.represent_data(item_key)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_844d13778e221f3c1f3cb6626445c4bc2f71766d/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
yaml.representer.RepresenterError: ('cannot represent an object', '_ansible_no_log')
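The RepresenterError in the traceback above is PyYAML's `safe_dump` refusing to serialize an object it has no representer for (here an Ansible-internal key, `_ansible_no_log`, that was not a plain string). A minimal sketch reproducing the same failure, with a hypothetical `Opaque` class standing in for the Ansible object:

```python
import yaml

# safe_dump (SafeDumper) only knows plain Python types: dict, list,
# str, int, float, bool, None, etc. Any other object, used as a key
# or a value, falls through to represent_undefined and raises
# RepresenterError -- the same path the failure_log.py callback hit.
class Opaque:
    pass

try:
    yaml.safe_dump({Opaque(): "value"})  # non-plain object as mapping key
except yaml.representer.RepresenterError as exc:
    print(type(exc).__name__)  # prints "RepresenterError"
```

The usual fix in logging code like this is to coerce the structure to plain types (e.g. via `str()` on keys) before handing it to `safe_dump`.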

fail 6534949 2021-11-30 08:43:52 2021-12-01 14:32:22 2021-12-01 15:54:11 1:21:49 1:12:07 0:09:42 gibba master centos 8.0 rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/ec sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} 2
Failure Reason:

saw valgrind issues

dead 6534022 2021-11-30 03:16:51 2021-12-02 19:06:31 2021-12-02 19:24:25 0:17:54 gibba master rhel 8.4 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
pass 6534009 2021-11-30 03:16:42 2021-12-02 18:30:21 2021-12-02 19:09:21 0:39:00 0:17:22 0:21:38 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
pass 6534001 2021-11-30 03:16:36 2021-12-02 18:11:35 2021-12-02 18:42:10 0:30:35 0:16:16 0:14:19 gibba master centos 8.stream fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8.stream} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/metrics} 2
pass 6533989 2021-11-30 03:16:28 2021-12-02 17:30:34 2021-12-02 18:17:30 0:46:56 0:36:24 0:10:32 gibba master centos 8.3 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} 2
pass 6533974 2021-11-30 03:16:17 2021-12-02 17:09:06 2021-12-02 17:28:36 0:19:30 0:10:08 0:09:22 gibba master centos 8.stream fs/libcephfs/{begin clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8.stream} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client} 2
fail 6533023 2021-11-29 05:06:47 2021-12-02 16:42:06 2021-12-02 17:09:28 0:27:22 0:17:49 0:09:33 gibba master centos 8.0 rgw/verify/{centos_latest clusters/fixed-2 frontend/beast msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/replicated sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/valgrind} 2
Failure Reason:

Command failed on gibba029 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'

fail 6533003 2021-11-29 05:06:34 2021-12-02 16:15:36 2021-12-02 16:42:22 0:26:46 0:18:12 0:08:34 gibba master centos 8.3 rgw/verify/{centos_latest clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/ec-profile sharding$/{single} striping$/{stripe-equals-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Command failed on gibba029 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'

pass 6532988 2021-11-29 05:06:24 2021-12-02 15:53:48 2021-12-02 16:15:41 0:21:53 0:11:14 0:10:39 gibba master ubuntu rgw/thrash/{civetweb clusters/fixed-2 install objectstore/bluestore-bitmap thrasher/default thrashosds-health workload/rgw_multipart_upload} 2
fail 6532965 2021-11-29 05:06:08 2021-12-02 15:24:56 2021-12-02 15:53:55 0:28:59 0:17:41 0:11:18 gibba master centos 8.3 rgw/verify/{centos_latest clusters/fixed-2 frontend/beast msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/replicated sharding$/{single} striping$/{stripe-equals-chunk} tasks/{0-install cls ragweed s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

Command failed on gibba029 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'

pass 6532951 2021-11-29 05:05:59 2021-12-02 15:04:09 2021-12-02 15:26:55 0:22:46 0:11:35 0:11:11 gibba master rgw/multifs/{clusters/fixed-2 frontend/civetweb objectstore/bluestore-bitmap overrides rgw_pool_type/replicated tasks/rgw_s3tests} 2
pass 6532898 2021-11-29 02:05:09 2021-12-02 13:45:48 2021-12-02 15:04:54 1:19:06 1:08:24 0:10:42 gibba master ubuntu 20.04 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-lz4 policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{ubuntu_latest} workloads/rbd-mirror-journal-stress-workunit} 2
pass 6532887 2021-11-29 02:05:00 2021-12-02 13:27:32 2021-12-02 13:46:51 0:19:19 0:10:06 0:09:13 gibba master centos 8.3 rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/filestore-xfs supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/rbd_nbd} 3
fail 6532878 2021-11-29 02:04:53 2021-12-02 13:11:27 2021-12-02 13:27:30 0:16:03 0:07:51 0:08:12 gibba master rhel 8.4 rbd/iscsi/{base/install cluster/{fixed-3 openstack} supported-random-distro$/{rhel_8} workloads/cephadm_iscsi} 3
Failure Reason:

Command failed on gibba026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:69b04de2932d00fc7fcaa14c718595ec42f18e67 -v bootstrap --fsid 2e1267da-5373-11ec-8c2e-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.2.126 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'