Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 5892822 2021-02-18 17:53:52 2021-02-18 17:54:51 2021-02-18 18:19:04 0:24:13 0:12:01 0:12:12 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
Failure Reason:

Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-b673e954-7215-11eb-900e-001a4aab830c@mon.a'

fail 5892823 2021-02-18 17:53:53 2021-02-18 17:57:41 2021-02-18 18:15:36 0:17:55 0:04:20 0:13:35 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
Failure Reason:

Command failed on smithi135 with status 5: 'sudo systemctl stop ceph-185b8d12-7215-11eb-900e-001a4aab830c@mon.smithi135'

fail 5892824 2021-02-18 17:53:54 2021-02-18 18:00:12 2021-02-18 18:20:17 0:20:05 0:07:38 0:12:27 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/progress} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)

fail 5892825 2021-02-18 17:53:55 2021-02-18 18:03:02 2021-02-18 18:40:07 0:37:05 0:22:21 0:14:44 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5892826 2021-02-18 17:53:56 2021-02-18 18:09:35 2021-02-18 18:50:43 0:41:08 0:27:03 0:14:05 smithi master centos 8.1 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 5892827 2021-02-18 17:53:57 2021-02-18 18:15:45 2021-02-18 18:30:29 0:14:44 0:05:48 0:08:56 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef0cbaddda96a295b3751035095dce0a63604552 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5892828 2021-02-18 17:53:57 2021-02-18 18:15:46 2021-02-18 18:35:14 0:19:28 0:07:37 0:11:51 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi133 with status 5: 'sudo systemctl stop ceph-ec4bf83a-7217-11eb-900e-001a4aab830c@mon.a'

fail 5892829 2021-02-18 17:53:58 2021-02-18 18:17:56 2021-02-18 18:33:20 0:15:24 0:05:18 0:10:06 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Command failed on smithi074 with status 5: 'sudo systemctl stop ceph-7c3e80b2-7217-11eb-900e-001a4aab830c@mon.a'

fail 5892830 2021-02-18 17:53:59 2021-02-18 18:17:56 2021-02-18 18:32:49 0:14:53 0:05:53 0:09:00 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef0cbaddda96a295b3751035095dce0a63604552 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

pass 5892831 2021-02-18 17:54:00 2021-02-18 18:17:57 2021-02-18 18:42:04 0:24:07 0:15:13 0:08:54 smithi master centos 8.1 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 5892832 2021-02-18 17:54:00 2021-02-18 18:20:43 2021-02-18 19:18:11 0:57:28 0:43:33 0:13:55 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
pass 5892833 2021-02-18 17:54:01 2021-02-18 18:22:24 2021-02-18 18:59:22 0:36:58 0:29:51 0:07:07 smithi master centos 8.1 rados/cephadm/with-work/{distro/centos_latest fixed-2 mode/root msgr/async-v2only start tasks/rados_api_tests} 2
fail 5892834 2021-02-18 17:54:02 2021-02-18 18:22:24 2021-02-18 18:50:05 0:27:41 0:06:01 0:21:40 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
Failure Reason:

Command failed on smithi192 with status 5: 'sudo systemctl stop ceph-dae0abb6-7219-11eb-900e-001a4aab830c@mon.a'

fail 5892835 2021-02-18 17:54:03 2021-02-18 18:32:25 2021-02-18 18:47:53 0:15:28 0:04:37 0:10:51 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
Failure Reason:

Command failed on smithi105 with status 5: 'sudo systemctl stop ceph-98e7cf6e-7219-11eb-900e-001a4aab830c@mon.smithi105'

fail 5892836 2021-02-18 17:54:04 2021-02-18 18:32:26 2021-02-18 18:55:19 0:22:53 0:16:32 0:06:21 smithi master centos 8.1 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5892837 2021-02-18 17:54:04 2021-02-18 18:32:26 2021-02-18 18:47:39 0:15:13 0:05:48 0:09:25 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef0cbaddda96a295b3751035095dce0a63604552 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5892838 2021-02-18 17:54:05 2021-02-18 18:32:56 2021-02-18 18:52:33 0:19:37 0:07:04 0:12:33 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi133 with status 5: 'sudo systemctl stop ceph-48dd8eea-721a-11eb-900e-001a4aab830c@mon.a'

fail 5892839 2021-02-18 17:54:06 2021-02-18 18:35:17 2021-02-18 19:00:35 0:25:18 0:13:52 0:11:26 smithi master centos 8.1 rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest}} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'

fail 5892840 2021-02-18 17:54:07 2021-02-18 18:39:18 2021-02-18 18:56:01 0:16:43 0:10:51 0:05:52 smithi master centos 8.1 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{centos_latest} tasks/progress} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)

fail 5892841 2021-02-18 17:54:07 2021-02-18 18:39:18 2021-02-18 18:54:29 0:15:11 0:05:26 0:09:45 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Command failed on smithi074 with status 5: 'sudo systemctl stop ceph-810371ae-721a-11eb-900e-001a4aab830c@mon.a'

fail 5892842 2021-02-18 17:54:08 2021-02-18 18:39:18 2021-02-18 19:05:08 0:25:50 0:18:06 0:07:44 smithi master centos 8.1 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5892843 2021-02-18 17:54:09 2021-02-18 18:40:19 2021-02-18 18:57:33 0:17:14 0:06:02 0:11:12 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi199 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ef0cbaddda96a295b3751035095dce0a63604552 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'