Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 5671807 2020-12-01 10:32:05 2020-12-02 01:57:59 2020-12-02 02:11:59 0:14:00 0:02:15 0:11:45 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/hammer backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{ubuntu_18.04} msgr-failures/few rados thrashers/none thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F18.04%2Fx86_64&ref=hammer

fail 5671808 2020-12-01 10:32:06 2020-12-02 01:58:16 2020-12-02 02:12:16 0:14:00 0:06:00 0:08:00 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0cf3929013af16873ab017bd7f19ea6a23487d2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 5671809 2020-12-01 10:32:07 2020-12-02 01:58:17 2020-12-04 08:00:21 2 days, 6:02:04 0:06:10 2 days, 5:55:54 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0cf3929013af16873ab017bd7f19ea6a23487d2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5671810 2020-12-01 10:32:08 2020-12-02 01:59:21 2020-12-02 02:17:21 0:18:00 0:10:45 0:07:15 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

fail 5671811 2020-12-01 10:32:09 2020-12-02 01:59:38 2020-12-02 02:25:37 0:25:59 0:18:30 0:07:29 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5671812 2020-12-01 10:32:09 2020-12-02 02:00:14 2020-12-02 02:12:13 0:11:59 0:05:00 0:06:59 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0cf3929013af16873ab017bd7f19ea6a23487d2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 5671813 2020-12-01 10:32:10 2020-12-02 02:01:14 2020-12-02 02:13:14 0:12:00 0:05:45 0:06:15 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi164 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0cf3929013af16873ab017bd7f19ea6a23487d2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5671814 2020-12-01 10:32:11 2020-12-02 02:01:14 2020-12-02 02:11:14 0:10:00 0:03:33 0:06:27 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5b4dc9d8-3443-11eb-980d-001a4aab830c -- ceph orch host add smithi081'

fail 5671815 2020-12-01 10:32:12 2020-12-02 02:01:36 2020-12-02 02:13:36 0:12:00 0:06:01 0:05:59 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0cf3929013af16873ab017bd7f19ea6a23487d2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 5671816 2020-12-01 10:32:13 2020-12-02 02:01:45 2020-12-02 02:19:44 0:17:59 0:10:11 0:07:48 smithi master ubuntu 18.04 rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'

fail 5671817 2020-12-01 10:32:14 2020-12-02 02:01:55 2020-12-02 02:13:54 0:11:59 0:05:08 0:06:51 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi018 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0cf3929013af16873ab017bd7f19ea6a23487d2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5671818 2020-12-01 10:32:15 2020-12-02 02:02:01 2020-12-02 02:18:01 0:16:00 0:10:45 0:05:15 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

fail 5671819 2020-12-01 10:32:15 2020-12-02 02:02:40 2020-12-02 02:14:39 0:11:59 0:05:06 0:06:53 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi125 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0cf3929013af16873ab017bd7f19ea6a23487d2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 5671820 2020-12-01 10:32:16 2020-12-02 02:02:40 2020-12-02 02:14:39 0:11:59 0:05:54 0:06:05 smithi master centos 8.1 rados/cephadm/workunits/{distro/centos_latest task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi204 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0cf3929013af16873ab017bd7f19ea6a23487d2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5671821 2020-12-01 10:32:17 2020-12-02 02:02:52 2020-12-02 02:20:51 0:17:59 0:02:20 0:15:39 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/hammer backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{ubuntu_18.04} msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F18.04%2Fx86_64&ref=hammer

fail 5671822 2020-12-01 10:32:18 2020-12-02 02:02:53 2020-12-02 02:10:53 0:08:00 0:02:05 0:05:55 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{ubuntu_18.04_podman} fixed-2} 2
Failure Reason:

Command failed on smithi058 with status 5: 'sudo systemctl stop ceph-93310112-3443-11eb-980d-001a4aab830c@mon.a'