Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 5931934 2021-03-04 03:23:17 2021-03-04 06:26:09 2021-03-04 06:44:53 0:18:44 0:09:43 0:09:01 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)

fail 5931935 2021-03-04 03:23:18 2021-03-04 06:28:00 2021-03-04 06:41:16 0:13:16 0:04:21 0:08:55 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
Failure Reason:

Command failed on smithi087 with status 5: 'sudo systemctl stop ceph-69d7283c-7cb4-11eb-9063-001a4aab830c@mon.a'

fail 5931936 2021-03-04 03:23:19 2021-03-04 06:29:51 2021-03-04 06:41:41 0:11:50 0:03:44 0:08:06 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
Failure Reason:

Command failed on smithi112 with status 5: 'sudo systemctl stop ceph-63726c18-7cb4-11eb-9063-001a4aab830c@mon.smithi112'

fail 5931937 2021-03-04 03:23:19 2021-03-04 06:30:11 2021-03-04 06:48:51 0:18:40 0:09:42 0:08:58 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)

fail 5931938 2021-03-04 03:23:20 2021-03-04 06:31:11 2021-03-04 12:30:08 5:58:57 5:46:36 0:12:21 smithi master ubuntu 18.04 rados/upgrade/mimic-x-singleton/{0-cluster/{openstack start} 1-install/mimic 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} 4
Failure Reason:

"2021-03-04T08:42:13.522918+0000 mon.a (mon.0) 17 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 5931939 2021-03-04 03:23:21 2021-03-04 06:35:52 2021-03-04 07:03:09 0:27:17 0:19:53 0:07:24 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{ubuntu_18.04} msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} 3
Failure Reason:

timeout expired in wait_until_healthy

fail 5931940 2021-03-04 03:23:22 2021-03-04 06:36:13 2021-03-04 06:53:22 0:17:09 0:09:27 0:07:42 smithi master rhel 8.2 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{rhel_latest} tasks/progress} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)

fail 5931941 2021-03-04 03:23:22 2021-03-04 06:36:33 2021-03-04 06:56:33 0:20:00 0:12:19 0:07:41 smithi master rhel 8.2 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-lz4 supported-random-distro$/{rhel_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)

fail 5931942 2021-03-04 03:23:23 2021-03-04 06:37:43 2021-03-04 07:07:26 0:29:43 0:19:29 0:10:14 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/jewel-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} 3
Failure Reason:

timeout expired in wait_until_healthy

fail 5931943 2021-03-04 03:23:24 2021-03-04 06:39:34 2021-03-04 06:53:07 0:13:33 0:05:57 0:07:36 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi189 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3664024eebddedeee285ff2f143b16556af4e85d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5931944 2021-03-04 03:23:25 2021-03-04 06:39:34 2021-03-04 06:53:25 0:13:51 0:06:22 0:07:29 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi013 with status 5: 'sudo systemctl stop ceph-1c8108a8-7cb6-11eb-9063-001a4aab830c@mon.a'

fail 5931945 2021-03-04 03:23:26 2021-03-04 06:39:54 2021-03-04 06:59:23 0:19:29 0:09:57 0:09:32 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{centos_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)

fail 5931946 2021-03-04 03:23:26 2021-03-04 06:40:25 2021-03-04 06:52:15 0:11:50 0:04:40 0:07:10 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Command failed on smithi184 with status 5: 'sudo systemctl stop ceph-facbd79c-7cb5-11eb-9063-001a4aab830c@mon.a'

fail 5931947 2021-03-04 03:23:27 2021-03-04 06:40:55 2021-03-04 07:06:34 0:25:39 0:19:19 0:06:20 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds
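The "reached maximum tries (180) after waiting for 180 seconds" message is the signature of a bounded polling loop that gave up. A minimal sketch of that pattern, assuming a 1-second interval between attempts (the helper name and defaults below are illustrative, not teuthology's actual implementation):

```python
import time

def wait_until(check, tries=180, interval=1.0):
    """Poll check() until it returns truthy; raise after `tries` attempts.

    Returns the attempt number on which check() first succeeded.
    """
    for attempt in range(1, tries + 1):
        if check():
            return attempt
        time.sleep(interval)
    # Message mirrors the failure reason seen in the run above.
    raise TimeoutError(
        f"reached maximum tries ({tries}) after waiting "
        f"for {int(tries * interval)} seconds"
    )
```

With tries=180 and a 1-second interval, a condition that never becomes true surfaces exactly as the failure reason recorded for this job.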

fail 5931948 2021-03-04 03:23:28 2021-03-04 06:41:15 2021-03-04 06:54:33 0:13:18 0:05:51 0:07:27 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi087 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3664024eebddedeee285ff2f143b16556af4e85d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 5931949 2021-03-04 03:23:29 2021-03-04 06:41:26 2021-03-04 07:00:37 0:19:11 0:11:52 0:07:19 smithi master rhel 8.2 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{rhel_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)

fail 5931950 2021-03-04 03:23:29 2021-03-04 06:41:46 2021-03-04 06:54:09 0:12:23 0:04:22 0:08:01 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 start} 2
Failure Reason:

Command failed on smithi041 with status 5: 'sudo systemctl stop ceph-358c4dda-7cb6-11eb-9063-001a4aab830c@mon.a'

fail 5931951 2021-03-04 03:23:30 2021-03-04 06:42:36 2021-03-04 06:54:00 0:11:24 0:03:41 0:07:43 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
Failure Reason:

Command failed on smithi082 with status 5: 'sudo systemctl stop ceph-19382b7c-7cb6-11eb-9063-001a4aab830c@mon.smithi082'

fail 5931952 2021-03-04 03:23:31 2021-03-04 06:42:36 2021-03-04 07:00:51 0:18:15 0:09:38 0:08:37 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zstd supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)

pass 5931953 2021-03-04 03:23:32 2021-03-04 06:43:17 2021-03-04 07:12:23 0:29:06 0:22:21 0:06:45 smithi master rhel 8.2 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read} 2
pass 5931954 2021-03-04 03:23:33 2021-03-04 06:43:27 2021-03-04 06:57:01 0:13:34 0:07:42 0:05:52 smithi master ubuntu 18.04 rados/singleton/{all/test-crash msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 1
pass 5931955 2021-03-04 03:23:34 2021-03-04 06:43:27 2021-03-04 07:16:23 0:32:56 0:27:22 0:05:34 smithi master rhel 8.2 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 5931956 2021-03-04 03:23:34 2021-03-04 06:43:28 2021-03-04 07:00:42 0:17:14 0:11:29 0:05:45 smithi master rhel 8.2 rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_latest} tasks/scrub_test} 2
pass 5931957 2021-03-04 03:23:35 2021-03-04 06:43:48 2021-03-04 07:19:24 0:35:36 0:28:58 0:06:38 smithi master rhel 8.2 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/osd-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_latest} thrashers/pggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
fail 5931958 2021-03-04 03:23:36 2021-03-04 06:44:59 2021-03-04 07:05:14 0:20:15 0:09:50 0:10:25 smithi master centos 8.1 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{centos_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)

fail 5931959 2021-03-04 03:23:37 2021-03-04 06:45:49 2021-03-04 06:59:23 0:13:34 0:05:51 0:07:43 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi175 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3664024eebddedeee285ff2f143b16556af4e85d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 5931960 2021-03-04 03:23:38 2021-03-04 06:45:59 2021-03-04 07:13:16 0:27:17 0:20:15 0:07:02 smithi master rhel 8.2 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_latest} thrashers/mapgap thrashosds-health workloads/cache-snaps-balanced} 2
fail 5931961 2021-03-04 03:23:38 2021-03-04 06:46:00 2021-03-04 06:59:37 0:13:37 0:06:17 0:07:20 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi096 with status 5: 'sudo systemctl stop ceph-f71b1652-7cb6-11eb-9063-001a4aab830c@mon.a'

fail 5931962 2021-03-04 03:23:39 2021-03-04 06:46:20 2021-03-04 07:09:12 0:22:52 0:10:07 0:12:45 smithi master centos 8.1 rados/singleton/{all/thrash_cache_writeback_proxy_none msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_latest}} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'

fail 5931963 2021-03-04 03:23:40 2021-03-04 06:47:40 2021-03-04 07:07:15 0:19:35 0:12:17 0:07:18 smithi master rhel 8.2 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-low-osd-mem-target supported-random-distro$/{rhel_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)

fail 5931964 2021-03-04 03:23:41 2021-03-04 06:47:41 2021-03-04 07:16:25 0:28:44 0:19:58 0:08:46 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/rbd_cls} 3
Failure Reason:

timeout expired in wait_until_healthy

fail 5931965 2021-03-04 03:23:41 2021-03-04 06:48:21 2021-03-04 07:01:41 0:13:20 0:06:54 0:06:26 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} tasks/progress} 2
Failure Reason:

Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)

fail 5931966 2021-03-04 03:23:42 2021-03-04 06:48:21 2021-03-04 07:00:16 0:11:55 0:04:39 0:07:16 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_orch_cli} 1
Failure Reason:

Command failed on smithi083 with status 5: 'sudo systemctl stop ceph-1816428c-7cb7-11eb-9063-001a4aab830c@mon.a'

fail 5931967 2021-03-04 03:23:43 2021-03-04 06:49:02 2021-03-04 07:04:46 0:15:44 0:09:30 0:06:14 smithi master ubuntu 18.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)

fail 5931968 2021-03-04 03:23:44 2021-03-04 06:49:12 2021-03-04 07:16:45 0:27:33 0:19:46 0:07:47 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5931969 2021-03-04 03:23:45 2021-03-04 06:49:32 2021-03-04 07:05:28 0:15:56 0:05:57 0:09:59 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi184 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3664024eebddedeee285ff2f143b16556af4e85d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 5931970 2021-03-04 03:23:45 2021-03-04 06:52:23 2021-03-04 07:20:53 0:28:30 0:19:45 0:08:45 smithi master ubuntu 18.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/off distro$/{ubuntu_18.04} msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

timeout expired in wait_until_healthy
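A few failure signatures recur across this run: the dashboard AuthTest failure (test_a_set_login_credentials) on every rados/dashboard job, systemctl status-5 stops on the ubuntu_18.04_podman cephadm jobs, and wait_until_healthy timeouts on the thrash-old-clients jobs. A minimal sketch for tallying those repeats from a plain-text listing like this one; the parsing and the masking rules are assumptions for illustration, not part of teuthology or pulpito:

```python
import re
from collections import Counter

# Hyphenated UUID, as in the cluster fsids embedded in the unit names above.
FSID_RE = re.compile(r"[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}")

def tally_failures(listing: str) -> Counter:
    """Count normalized failure reasons in a pulpito-style text listing.

    The first non-blank line after each "Failure Reason:" marker is taken
    as the reason; host numbers and fsids are masked so identical failures
    on different machines collapse into one bucket.
    """
    counts = Counter()
    lines = iter(listing.splitlines())
    for line in lines:
        if line.strip() == "Failure Reason:":
            for reason in lines:
                reason = reason.strip()
                if reason:
                    reason = re.sub(r"smithi\d+", "smithi*", reason)
                    reason = FSID_RE.sub("<fsid>", reason)
                    counts[reason] += 1
                    break
    return counts
```

Run over this listing, the tally makes the clustering obvious, e.g. the dashboard AuthTest failure accounts for nine of the failed jobs.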