User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
ideepika | 2021-03-12 19:31:07 | 2021-03-12 19:40:16 | 2021-03-13 00:15:26 | 4:35:10 | rados | wip-yuri2-testing-2021-03-09-1006-octopus | gibba | dbb79e0 | 4 | 16 | 5 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 5958920 | 2021-03-12 19:32:26 | 2021-03-12 19:37:05 | 2021-03-12 20:03:33 | 0:26:28 | 0:13:35 | 0:12:53 | gibba | master | ubuntu | 18.04 | rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} | 2 | |
pass | 5958921 | 2021-03-12 19:32:27 | 2021-03-12 19:40:16 | 2021-03-13 00:15:26 | 4:35:10 | 4:05:43 | 0:29:27 | gibba | master | rhel | 8.2 | rados/objectstore/{backends/objectstore supported-random-distro$/{rhel_latest}} | 1 | |
fail | 5958922 | 2021-03-12 19:32:28 | 2021-03-12 19:40:16 | 2021-03-12 19:58:03 | 0:17:47 | 0:06:39 | 0:11:08 | gibba | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} | 2 | |
Failure Reason: Command failed on gibba025 with status 5: 'sudo systemctl stop ceph-fc58e9c0-836c-11eb-908a-001a4aab830c@mon.a'
fail | 5958923 | 2021-03-12 19:32:29 | 2021-03-12 19:40:47 | 2021-03-12 20:00:55 | 0:20:08 | 0:05:56 | 0:14:12 | gibba | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} | 2 | |
Failure Reason: Command failed on gibba022 with status 5: 'sudo systemctl stop ceph-8e0ed9f6-836d-11eb-908a-001a4aab830c@mon.gibba022'
fail | 5958924 | 2021-03-12 19:32:30 | 2021-03-12 19:45:47 | 2021-03-12 20:17:01 | 0:31:14 | 0:21:43 | 0:09:31 | gibba | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/off msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on gibba014 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb79e0547db3abf076b9bc9b6ad97ede0519a0e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
dead | 5958925 | 2021-03-12 19:32:31 | 2021-03-12 19:45:48 | 2021-03-12 20:04:27 | 0:18:39 | 0:01:49 | 0:16:50 | gibba | master | ubuntu | 18.04 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack start} 1-install/mimic 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 | |
Failure Reason: {'Failure object was': {'gibba040.front.sepia.ceph.com': {'msg': 'Failed to checkout branch master', 'stdout': '', 'stderr': "fatal: Unable to create '/home/teuthworker/.cache/src/keys/.git/index.lock': File exists.\n\nAnother git process seems to be running in this repository, e.g.\nan editor opened by 'git commit'. Please make sure all processes\nare terminated then try again. If it still fails, a git process\nmay have crashed in this repository earlier:\nremove the file manually to continue.\n", 'rc': 128, 'invocation': {'module_args': {'repo': 'https://github.com/ceph/keys', 'version': 'master', 'force': True, 'dest': '/home/teuthworker/.cache/src/keys', 'remote': 'origin', 'clone': True, 'update': True, 'verify_commit': False, 'gpg_whitelist': [], 'accept_hostkey': False, 'bare': False, 'recursive': True, 'track_submodules': False, 'refspec': 'None', 'reference': 'None', 'depth': 'None', 'key_file': 'None', 'ssh_opts': 'None', 'executable': 'None', 'umask': 'None', 'archive': 'None', 'separate_git_dir': 'None'}}, 'stdout_lines': [], 'stderr_lines': ["fatal: Unable to create '/home/teuthworker/.cache/src/keys/.git/index.lock': File exists.", '', 'Another git process seems to be running in this repository, e.g.', "an editor opened by 'git commit'. Please make sure all processes", 'are terminated then try again. If it still fails, a git process', 'may have crashed in this repository earlier:', 'remove the file manually to continue.'], '_ansible_no_log': False, 'attempts': 5, 'changed': False, '_ansible_delegated_vars': {}}}, 'Traceback (most recent call last)': 'File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, Dumper=SafeDumper, **kwds) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all dumper.represent(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping node_key = self.represent_data(item_key) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_e8438aef42f98e8ef3e8cdc260580ac63cdc5a1f/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined raise RepresenterError("cannot represent an object", data)', 'yaml.representer.RepresenterError': "('cannot represent an object', '_ansible_no_log')"}
fail | 5958926 | 2021-03-12 19:32:32 | 2021-03-12 19:51:09 | 2021-03-12 20:12:24 | 0:21:15 | 0:10:30 | 0:10:45 | gibba | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/progress} | 2 | |
Failure Reason: Test failure: test_osd_cannot_recover (tasks.mgr.test_progress.TestProgress)
fail | 5958927 | 2021-03-12 19:32:32 | 2021-03-12 19:51:10 | 2021-03-12 20:15:49 | 0:24:39 | 0:16:16 | 0:08:23 | gibba | master | rhel | 8.2 | rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 5958928 | 2021-03-12 19:32:33 | 2021-03-12 19:53:00 | 2021-03-12 20:10:07 | 0:17:07 | 0:07:22 | 0:09:45 | gibba | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on gibba013 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb79e0547db3abf076b9bc9b6ad97ede0519a0e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
dead | 5958929 | 2021-03-12 19:32:34 | 2021-03-12 19:53:00 | 2021-03-12 19:54:44 | 0:01:44 | | | gibba | master | rhel | 7.7 | rados/cephadm/smoke/{distro/rhel_7 fixed-2 start} | 2 | |
Failure Reason: Error reimaging machines: Could not find an image for rhel 7.7
dead | 5958930 | 2021-03-12 19:32:35 | 2021-03-12 19:54:41 | 2021-03-12 19:54:45 | 0:00:04 | | | gibba | master | rhel | 7.7 | rados/cephadm/smoke-roleless/{distro/rhel_7 start} | 2 | |
Failure Reason: Error reimaging machines: Could not find an image for rhel 7.7
fail | 5958931 | 2021-03-12 19:32:36 | 2021-03-12 19:54:42 | 2021-03-12 20:14:26 | 0:19:44 | 0:08:26 | 0:11:18 | gibba | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on gibba004 with status 5: 'sudo systemctl stop ceph-3a7efbb6-836f-11eb-908a-001a4aab830c@mon.a'
fail | 5958932 | 2021-03-12 19:32:37 | 2021-03-12 19:55:02 | 2021-03-12 20:10:07 | 0:15:05 | 0:05:42 | 0:09:23 | gibba | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} | 1 | |
Failure Reason: Command failed on gibba003 with status 5: 'sudo systemctl stop ceph-bef289ea-836e-11eb-908a-001a4aab830c@mon.a'
fail | 5958933 | 2021-03-12 19:32:37 | 2021-03-12 19:55:03 | 2021-03-12 20:12:01 | 0:16:58 | 0:07:08 | 0:09:50 | gibba | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_adoption.sh) on gibba036 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb79e0547db3abf076b9bc9b6ad97ede0519a0e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
fail | 5958934 | 2021-03-12 19:32:38 | 2021-03-12 19:55:03 | 2021-03-12 20:12:00 | 0:16:57 | 0:06:34 | 0:10:23 | gibba | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} | 2 | |
Failure Reason: Command failed on gibba007 with status 5: 'sudo systemctl stop ceph-f69f2be6-836e-11eb-908a-001a4aab830c@mon.a'
fail | 5958935 | 2021-03-12 19:32:39 | 2021-03-12 19:55:03 | 2021-03-12 20:13:18 | 0:18:15 | 0:05:15 | 0:13:00 | gibba | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} | 2 | |
Failure Reason: Command failed on gibba025 with status 5: 'sudo systemctl stop ceph-354467a8-836f-11eb-908a-001a4aab830c@mon.gibba025'
fail | 5958936 | 2021-03-12 19:32:40 | 2021-03-12 19:58:14 | 2021-03-12 20:13:16 | 0:15:02 | 0:06:27 | 0:08:35 | gibba | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on gibba024 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb79e0547db3abf076b9bc9b6ad97ede0519a0e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 5958937 | 2021-03-12 19:32:41 | 2021-03-12 19:58:14 | 2021-03-12 20:20:17 | 0:22:03 | 0:09:00 | 0:13:03 | gibba | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on gibba022 with status 5: 'sudo systemctl stop ceph-1c9a3fce-8370-11eb-908a-001a4aab830c@mon.a'
dead | 5958938 | 2021-03-12 19:32:41 | 2021-03-12 20:01:05 | 2021-03-12 20:03:49 | 0:02:44 | | | gibba | master | rhel | 7.7 | rados/cephadm/smoke/{distro/rhel_7 fixed-2 start} | 2 | |
Failure Reason: Error reimaging machines: Could not find an image for rhel 7.7
dead | 5958939 | 2021-03-12 19:32:42 | 2021-03-12 20:03:45 | 2021-03-12 20:04:10 | 0:00:25 | | | gibba | master | rhel | 7.7 | rados/cephadm/smoke-roleless/{distro/rhel_7 start} | 2 | |
Failure Reason: Error reimaging machines: Could not find an image for rhel 7.7
fail | 5958940 | 2021-03-12 19:32:43 | 2021-03-12 20:04:06 | 2021-03-12 20:22:56 | 0:18:50 | 0:12:29 | 0:06:21 | gibba | master | rhel | 8.2 | rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{rhel_latest} tasks/progress} | 2 | |
Failure Reason: Test failure: test_osd_cannot_recover (tasks.mgr.test_progress.TestProgress)
fail | 5958941 | 2021-03-12 19:32:44 | 2021-03-12 20:04:16 | 2021-03-12 20:19:35 | 0:15:19 | 0:05:38 | 0:09:41 | gibba | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} | 1 | |
Failure Reason: Command failed on gibba045 with status 5: 'sudo systemctl stop ceph-0527639e-8370-11eb-908a-001a4aab830c@mon.a'
pass | 5958942 | 2021-03-12 19:32:45 | 2021-03-12 20:04:36 | 2021-03-12 20:35:23 | 0:30:47 | 0:24:49 | 0:05:58 | gibba | master | rhel | 8.2 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
fail | 5958943 | 2021-03-12 19:32:45 | 2021-03-12 20:04:37 | 2021-03-12 20:19:36 | 0:14:59 | 0:05:56 | 0:09:03 | gibba | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_adoption.sh) on gibba015 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb79e0547db3abf076b9bc9b6ad97ede0519a0e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
pass | 5958944 | 2021-03-12 19:32:46 | 2021-03-12 20:04:37 | 2021-03-12 20:42:53 | 0:38:16 | 0:23:40 | 0:14:36 | gibba | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/rados_api_tests} | 2 |