Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6024889 2021-04-06 17:44:56 2021-04-06 17:45:35 2021-04-06 18:08:00 0:22:25 0:09:06 0:13:19 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
Failure Reason:

Command failed on smithi014 with status 5: 'sudo systemctl stop ceph-d10cad16-9702-11eb-8190-001a4aab830c@mon.a'

fail 6024890 2021-04-06 17:44:57 2021-04-06 17:48:04 2021-04-06 18:15:00 0:26:56 0:06:42 0:20:14 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
Failure Reason:

Command failed on smithi003 with status 5: 'sudo systemctl stop ceph-bbb8c9f8-9703-11eb-8190-001a4aab830c@mon.smithi003'

pass 6024891 2021-04-06 17:44:58 2021-04-06 17:57:35 2021-04-06 18:43:45 0:46:10 0:23:37 0:22:33 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/off msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} 2
pass 6024892 2021-04-06 17:44:59 2021-04-06 18:08:43 2021-04-06 23:45:29 5:36:46 5:20:44 0:16:02 smithi master ubuntu 18.04 rados/upgrade/mimic-x-singleton/{0-cluster/{openstack start} 1-install/mimic 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} 4
pass 6024893 2021-04-06 17:45:00 2021-04-06 18:10:13 2021-04-06 18:35:48 0:25:35 0:19:00 0:06:35 smithi master rhel 7.7 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_7} fixed-2} 2
pass 6024894 2021-04-06 17:45:02 2021-04-06 18:10:14 2021-04-06 18:51:46 0:41:32 0:31:03 0:10:29 smithi master ubuntu 18.04 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_big} 2
fail 6024895 2021-04-06 17:45:02 2021-04-06 18:11:09 2021-04-06 18:29:36 0:18:27 0:07:56 0:10:31 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi173 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=281eafa199ce1c7aaf58b827a0f0677bfa1653c4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 6024896 2021-04-06 17:45:03 2021-04-06 18:11:10 2021-04-06 18:37:49 0:26:39 0:14:38 0:12:01 smithi master centos 8.1 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/off msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/set-chunk-promote-flush} 2
pass 6024897 2021-04-06 17:45:04 2021-04-06 18:11:10 2021-04-06 18:33:37 0:22:27 0:16:16 0:06:11 smithi master rhel 7.7 rados/cephadm/smoke/{distro/rhel_7 fixed-2 start} 2
pass 6024898 2021-04-06 17:45:05 2021-04-06 18:14:36 2021-04-06 18:39:53 0:25:17 0:16:28 0:08:49 smithi master rhel 7.7 rados/cephadm/smoke-roleless/{distro/rhel_7 start} 2
pass 6024899 2021-04-06 17:45:06 2021-04-06 18:14:36 2021-04-06 18:49:46 0:35:10 0:24:52 0:10:18 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects-balanced} 2
fail 6024900 2021-04-06 17:45:07 2021-04-06 18:14:37 2021-04-06 18:37:49 0:23:12 0:11:17 0:11:55 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi077 with status 5: 'sudo systemctl stop ceph-f192a0d2-9706-11eb-8190-001a4aab830c@mon.a'

fail 6024901 2021-04-06 17:45:08 2021-04-06 18:14:37 2021-04-06 18:35:49 0:21:12 0:09:58 0:11:14 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Command failed on smithi164 with status 5: 'sudo systemctl stop ceph-b2865b40-9706-11eb-8190-001a4aab830c@mon.a'

fail 6024902 2021-04-06 17:45:09 2021-04-06 18:14:38 2021-04-06 18:37:49 0:23:11 0:10:45 0:12:26 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi141 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=281eafa199ce1c7aaf58b827a0f0677bfa1653c4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

pass 6024903 2021-04-06 17:45:11 2021-04-06 18:14:38 2021-04-06 18:54:33 0:39:55 0:30:52 0:09:03 smithi master rhel 8.2 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 6024904 2021-04-06 17:45:12 2021-04-06 18:14:39 2021-04-06 18:47:45 0:33:06 0:24:30 0:08:36 smithi master rhel 8.2 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6024905 2021-04-06 17:45:13 2021-04-06 18:14:39 2021-04-06 18:54:34 0:39:55 0:26:16 0:13:39 smithi master ubuntu 18.04 rados/multimon/{clusters/21 msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 3
fail 6024906 2021-04-06 17:45:13 2021-04-06 18:14:40 2021-04-06 18:35:49 0:21:09 0:09:28 0:11:41 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
Failure Reason:

Command failed on smithi003 with status 5: 'sudo systemctl stop ceph-b18384ca-9706-11eb-8190-001a4aab830c@mon.a'

pass 6024907 2021-04-06 17:45:15 2021-04-06 18:15:11 2021-04-06 18:37:49 0:22:38 0:13:08 0:09:30 smithi master ubuntu 18.04 rados/perf/{ceph objectstore/bluestore-basic-min-osd-mem-target openstack settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
fail 6024908 2021-04-06 17:45:16 2021-04-06 18:15:11 2021-04-06 18:34:40 0:19:29 0:08:35 0:10:54 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
Failure Reason:

Command failed on smithi074 with status 5: 'sudo systemctl stop ceph-a876b2f8-9706-11eb-8190-001a4aab830c@mon.smithi074'

pass 6024909 2021-04-06 17:45:17 2021-04-06 18:15:11 2021-04-06 18:37:49 0:22:38 0:12:51 0:09:47 smithi master centos 8.1 rados/singleton/{all/test-crash msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest}} 1
pass 6024910 2021-04-06 17:45:18 2021-04-06 18:15:12 2021-04-06 18:47:46 0:32:34 0:24:54 0:07:40 smithi master rhel 8.2 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_latest} thrashers/pggrow thrashosds-health workloads/cache-agent-small} 2
pass 6024911 2021-04-06 17:45:19 2021-04-06 18:15:12 2021-04-06 18:45:46 0:30:34 0:20:49 0:09:45 smithi master rhel 7.7 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_7} fixed-2} 2
fail 6024912 2021-04-06 17:45:20 2021-04-06 18:15:13 2021-04-06 18:35:49 0:20:36 0:10:21 0:10:15 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi162 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=281eafa199ce1c7aaf58b827a0f0677bfa1653c4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 6024913 2021-04-06 17:45:21 2021-04-06 18:15:13 2021-04-06 18:37:49 0:22:36 0:11:22 0:11:14 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi114 with status 5: 'sudo systemctl stop ceph-f5c4e25a-9706-11eb-8190-001a4aab830c@mon.a'

pass 6024914 2021-04-06 17:45:22 2021-04-06 18:15:13 2021-04-06 18:43:45 0:28:32 0:18:20 0:10:12 smithi master rhel 7.7 rados/cephadm/smoke/{distro/rhel_7 fixed-2 start} 2
pass 6024915 2021-04-06 17:45:22 2021-04-06 18:16:12 2021-04-06 18:40:55 0:24:43 0:16:39 0:08:04 smithi master rhel 7.7 rados/cephadm/smoke-roleless/{distro/rhel_7 start} 2
fail 6024916 2021-04-06 17:45:23 2021-04-06 18:16:13 2021-04-06 18:34:45 0:18:32 0:09:56 0:08:36 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Command failed on smithi002 with status 5: 'sudo systemctl stop ceph-b2f768b2-9706-11eb-8190-001a4aab830c@mon.a'

fail 6024917 2021-04-06 17:45:24 2021-04-06 18:17:03 2021-04-06 18:36:40 0:19:37 0:10:34 0:09:03 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi001 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=281eafa199ce1c7aaf58b827a0f0677bfa1653c4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'