| ID | Status | Ceph Branch | Suite Branch | Teuthology Branch | Machine | OS | Nodes | Description | Failure Reason |
|----|--------|-------------|--------------|-------------------|---------|----|-------|-------------|----------------|
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} | Command failed on smithi064 with status 5: 'sudo systemctl stop ceph-c44d4d46-7395-11eb-901d-001a4aab830c@mon.a' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} | Command failed on smithi007 with status 5: 'sudo systemctl stop ceph-79b45a04-7395-11eb-901d-001a4aab830c@mon.smithi007' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/progress} | Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress) |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} | Command failed (workunit test cephadm/test_cephadm.sh) on smithi164 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=171a07719aa017f5b7103000f9d916d086c7324f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async-v1only start tasks/rados_python} | Command failed on smithi061 with status 5: 'sudo systemctl stop ceph-450e939a-7396-11eb-901d-001a4aab830c@mon.a' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} | Command failed on smithi155 with status 5: 'sudo systemctl stop ceph-0a8f5d3a-7396-11eb-901d-001a4aab830c@mon.a' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/singleton/{all/pg-autoscaler-progress-off msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | "2021-02-20T16:11:48.542158+0000 mon.a (mon.0) 300 : cluster [WRN] Health check failed: 1 pools have both target_size_bytes and target_size_ratio set (POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO)" in cluster log |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} | Command failed (workunit test cephadm/test_adoption.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=171a07719aa017f5b7103000f9d916d086c7324f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} | Command failed on smithi027 with status 5: 'sudo systemctl stop ceph-31bdf20e-7396-11eb-901d-001a4aab830c@mon.a' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} | Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-37bd2eb8-7396-11eb-901d-001a4aab830c@mon.smithi138' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-lz4 supported-random-distro$/{ubuntu_latest} tasks/insights} |  |
|  |  | octopus | octopus | master | smithi | centos 8.1 |  | rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2} | reached maximum tries (180) after waiting for 180 seconds |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} | Command failed (workunit test cephadm/test_cephadm.sh) on smithi176 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=171a07719aa017f5b7103000f9d916d086c7324f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async start tasks/rados_api_tests} | Command failed on smithi133 with status 5: 'sudo systemctl stop ceph-d341abe2-7397-11eb-901d-001a4aab830c@mon.a' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} | Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} tasks/progress} | Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress) |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} | Command failed on smithi180 with status 5: 'sudo systemctl stop ceph-95dcbcca-7396-11eb-901d-001a4aab830c@mon.a' |
|  |  | octopus | octopus | master | smithi | ubuntu 18.04 |  | rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} | Command failed (workunit test cephadm/test_adoption.sh) on smithi156 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=171a07719aa017f5b7103000f9d916d086c7324f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |