Teuthology run results. All jobs used Ceph branch octopus, suite branch octopus, teuthology branch master, and smithi machines; the ID, Status, and Nodes fields were not captured in this export, and rows with an empty Failure Reason have no failure recorded.

| OS | Description | Failure Reason |
|----|-------------|----------------|
| ubuntu 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} | Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-b673e954-7215-11eb-900e-001a4aab830c@mon.a' |
| ubuntu 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} | Command failed on smithi135 with status 5: 'sudo systemctl stop ceph-185b8d12-7215-11eb-900e-001a4aab830c@mon.smithi135' |
| ubuntu 18.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/progress} | Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress) |
| rhel 8.0 | rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2} | reached maximum tries (180) after waiting for 180 seconds |
| centos 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | |
| ubuntu 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} | Command failed (workunit test cephadm/test_cephadm.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef0cbaddda96a295b3751035095dce0a63604552 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
| ubuntu 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async-v1only start tasks/rados_python} | Command failed on smithi133 with status 5: 'sudo systemctl stop ceph-ec4bf83a-7217-11eb-900e-001a4aab830c@mon.a' |
| ubuntu 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} | Command failed on smithi074 with status 5: 'sudo systemctl stop ceph-7c3e80b2-7217-11eb-900e-001a4aab830c@mon.a' |
| ubuntu 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} | Command failed (workunit test cephadm/test_adoption.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef0cbaddda96a295b3751035095dce0a63604552 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
| centos 8.1 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | |
| ubuntu 18.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} | |
| centos 8.1 | rados/cephadm/with-work/{distro/centos_latest fixed-2 mode/root msgr/async-v2only start tasks/rados_api_tests} | |
| ubuntu 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} | Command failed on smithi192 with status 5: 'sudo systemctl stop ceph-dae0abb6-7219-11eb-900e-001a4aab830c@mon.a' |
| ubuntu 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} | Command failed on smithi105 with status 5: 'sudo systemctl stop ceph-98e7cf6e-7219-11eb-900e-001a4aab830c@mon.smithi105' |
| centos 8.1 | rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2} | reached maximum tries (180) after waiting for 180 seconds |
| ubuntu 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} | Command failed (workunit test cephadm/test_cephadm.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef0cbaddda96a295b3751035095dce0a63604552 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
| ubuntu 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async start tasks/rados_api_tests} | Command failed on smithi133 with status 5: 'sudo systemctl stop ceph-48dd8eea-721a-11eb-900e-001a4aab830c@mon.a' |
| centos 8.1 | rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest}} | Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base' |
| centos 8.1 | rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{centos_latest} tasks/progress} | Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress) |
| ubuntu 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} | Command failed on smithi074 with status 5: 'sudo systemctl stop ceph-810371ae-721a-11eb-900e-001a4aab830c@mon.a' |
| centos 8.1 | rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2} | reached maximum tries (180) after waiting for 180 seconds |
| ubuntu 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} | Command failed (workunit test cephadm/test_adoption.sh) on smithi199 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef0cbaddda96a295b3751035095dce0a63604552 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |