Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 5898857 2021-02-20 15:51:12 2021-02-20 15:52:24 2021-02-20 16:09:47 0:17:23 0:05:50 0:11:33 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
Failure Reason:

Command failed on smithi064 with status 5: 'sudo systemctl stop ceph-c44d4d46-7395-11eb-901d-001a4aab830c@mon.a'
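
Note: this "systemctl stop ... status 5" failure recurs across the ubuntu_18.04_podman cephadm jobs in this run. Exit status 5 from systemctl generally means the unit is not loaded, i.e. cephadm likely never created the mon container in the first place. A diagnostic sketch, assuming shell access to the node; the fsid/unit name is the one quoted above:

    # exit 5 on "stop" usually means the unit was never created
    sudo systemctl status 'ceph-c44d4d46-7395-11eb-901d-001a4aab830c@mon.a'
    # which ceph units exist for this fsid, if any
    sudo systemctl list-units 'ceph-c44d4d46-7395-11eb-901d-001a4aab830c@*'
    # cephadm's own log usually records why the deploy failed
    sudo tail -n 100 /var/log/ceph/cephadm.log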

fail 5898858 2021-02-20 15:51:13 2021-02-20 15:52:24 2021-02-20 16:07:47 0:15:23 0:04:25 0:10:58 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
Failure Reason:

Command failed on smithi007 with status 5: 'sudo systemctl stop ceph-79b45a04-7395-11eb-901d-001a4aab830c@mon.smithi007'

fail 5898859 2021-02-20 15:51:14 2021-02-20 15:52:25 2021-02-20 16:11:17 0:18:52 0:07:31 0:11:21 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/progress} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)
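
Note: test_osd_came_back lives in qa/tasks/mgr/test_progress.py and can usually be reproduced outside teuthology with the tree's vstart_runner. A sketch, assuming a local vstart build and teuthology's Python dependencies on PYTHONPATH:

    cd ceph/build
    python3 ../qa/tasks/vstart_runner.py \
        tasks.mgr.test_progress.TestProgress.test_osd_came_back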

fail 5898860 2021-02-20 15:51:15 2021-02-20 15:53:55 2021-02-20 16:09:35 0:15:40 0:05:43 0:09:57 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi164 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=171a07719aa017f5b7103000f9d916d086c7324f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
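
Note: the long command above is just teuthology's workunit wrapper setting up the test environment; the script itself is qa/workunits/cephadm/test_cephadm.sh in the ceph tree and can typically be run standalone for debugging. A sketch, assuming a checkout of the CEPH_REF shown and root access (the script drives the container runtime directly):

    git clone https://github.com/ceph/ceph.git && cd ceph
    git checkout 171a07719aa017f5b7103000f9d916d086c7324f
    sudo bash qa/workunits/cephadm/test_cephadm.sh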

fail 5898861 2021-02-20 15:51:16 2021-02-20 15:53:55 2021-02-20 16:12:47 0:18:52 0:06:58 0:11:54 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi061 with status 5: 'sudo systemctl stop ceph-450e939a-7396-11eb-901d-001a4aab830c@mon.a'

fail 5898862 2021-02-20 15:51:16 2021-02-20 15:55:16 2021-02-20 16:11:22 0:16:06 0:05:17 0:10:49 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Command failed on smithi155 with status 5: 'sudo systemctl stop ceph-0a8f5d3a-7396-11eb-901d-001a4aab830c@mon.a'

fail 5898863 2021-02-20 15:51:17 2021-02-20 15:55:36 2021-02-20 16:13:08 0:17:32 0:07:46 0:09:46 smithi master ubuntu 18.04 rados/singleton/{all/pg-autoscaler-progress-off msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 2
Failure Reason:

"2021-02-20T16:11:48.542158+0000 mon.a (mon.0) 300 : cluster [WRN] Health check failed: 1 pools have both target_size_bytes and target_size_ratio set (POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO)" in cluster log

fail 5898864 2021-02-20 15:51:18 2021-02-20 15:55:36 2021-02-20 16:10:24 0:14:48 0:05:47 0:09:01 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=171a07719aa017f5b7103000f9d916d086c7324f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 5898865 2021-02-20 15:51:19 2021-02-20 15:55:36 2021-02-20 16:12:09 0:16:33 0:05:01 0:11:32 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
Failure Reason:

Command failed on smithi027 with status 5: 'sudo systemctl stop ceph-31bdf20e-7396-11eb-901d-001a4aab830c@mon.a'

fail 5898866 2021-02-20 15:51:20 2021-02-20 15:56:47 2021-02-20 16:12:45 0:15:58 0:04:15 0:11:43 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
Failure Reason:

Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-37bd2eb8-7396-11eb-901d-001a4aab830c@mon.smithi138'

pass 5898867 2021-02-20 15:51:21 2021-02-20 15:57:17 2021-02-20 16:18:31 0:21:14 0:10:55 0:10:19 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-lz4 supported-random-distro$/{ubuntu_latest} tasks/insights} 2
fail 5898868 2021-02-20 15:51:21 2021-02-20 15:57:17 2021-02-20 16:21:45 0:24:28 0:16:46 0:07:42 smithi master centos 8.1 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds
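
Note: the upgrade task polls cluster state and gives up after the tries shown, so "reached maximum tries" means the awaited condition never became true rather than an explicit error. A sketch of what one might poll by hand on a stuck upgrade, assuming cephadm shell access:

    ceph orch upgrade status   # target image and current state
    ceph -s                    # progress events
    ceph health detail         # e.g. UPGRADE_NO_STANDBY_MGR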

fail 5898869 2021-02-20 15:51:22 2021-02-20 15:58:48 2021-02-20 16:14:10 0:15:22 0:05:40 0:09:42 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi176 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=171a07719aa017f5b7103000f9d916d086c7324f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5898870 2021-02-20 15:51:23 2021-02-20 15:58:48 2021-02-20 16:24:05 0:25:17 0:06:58 0:18:19 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi133 with status 5: 'sudo systemctl stop ceph-d341abe2-7397-11eb-901d-001a4aab830c@mon.a'

fail 5898871 2021-02-20 15:51:24 2021-02-20 15:59:28 2021-02-20 16:27:24 0:27:56 0:12:34 0:15:22 smithi master ubuntu 18.04 rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'
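
Note: "Command crashed" means ceph_test_rados itself died (e.g. on a signal), not that an assertion in the workload failed. Teuthology normally captures cores under the job's archive directory; a first-look sketch, assuming node access, with a hypothetical core filename:

    ls /home/ubuntu/cephtest/archive/coredump/
    gdb ceph_test_rados /home/ubuntu/cephtest/archive/coredump/<core> -ex bt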

fail 5898872 2021-02-20 15:51:25 2021-02-20 15:59:28 2021-02-20 16:17:59 0:18:31 0:07:34 0:10:57 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} tasks/progress} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)

fail 5898873 2021-02-20 15:51:25 2021-02-20 16:00:09 2021-02-20 16:15:16 0:15:07 0:05:26 0:09:41 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Command failed on smithi180 with status 5: 'sudo systemctl stop ceph-95dcbcca-7396-11eb-901d-001a4aab830c@mon.a'

fail 5898874 2021-02-20 15:51:26 2021-02-20 16:00:09 2021-02-20 16:16:41 0:16:32 0:05:45 0:10:47 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi156 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=171a07719aa017f5b7103000f9d916d086c7324f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'