Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 5678533 2020-12-04 02:13:03 2020-12-04 02:14:52 2020-12-04 02:36:51 0:21:59 0:15:09 0:06:50 smithi master rhel 8.3 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2 mon_election/classic} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5678534 2020-12-04 02:13:04 2020-12-04 02:14:52 2020-12-04 05:58:55 3:44:03 3:38:06 0:05:57 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
pass 5678535 2020-12-04 02:13:05 2020-12-04 02:14:52 2020-12-04 02:42:51 0:27:59 0:22:18 0:05:41 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/small-objects} 2
pass 5678536 2020-12-04 02:13:06 2020-12-04 02:14:52 2020-12-04 02:42:51 0:27:59 0:17:40 0:10:19 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/connectivity} 2
pass 5678537 2020-12-04 02:13:06 2020-12-04 02:14:52 2020-12-04 02:32:52 0:18:00 0:08:25 0:09:35 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_adoption} 1
dead 5678538 2020-12-04 02:13:07 2020-12-04 02:14:52 2020-12-04 07:20:57 5:06:05 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados tasks/rados_api_tests validater/valgrind} 2
pass 5678539 2020-12-04 02:13:08 2020-12-04 02:14:52 2020-12-04 02:34:52 0:20:00 0:12:08 0:07:52 smithi master centos 8.2 rados/cephadm/workunits/{distro/centos_latest mon_election/classic task/test_cephadm} 1
pass 5678540 2020-12-04 02:13:09 2020-12-04 02:16:53 2020-12-04 04:38:55 2:22:02 2:12:25 0:09:37 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/radosbench} 2
pass 5678541 2020-12-04 02:13:09 2020-12-04 02:16:53 2020-12-04 02:38:53 0:22:00 0:11:42 0:10:18 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04 start} 2
pass 5678542 2020-12-04 02:13:10 2020-12-04 02:16:53 2020-12-04 02:44:53 0:28:00 0:18:13 0:09:47 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/connectivity} 2
fail 5678543 2020-12-04 02:13:11 2020-12-04 02:16:53 2020-12-04 03:54:54 1:38:01 1:32:04 0:05:57 smithi master centos 8.2 rados/standalone/{mon_election/connectivity supported-random-distro$/{centos_8} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-test.sh) on smithi066 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=382dfe8aba4747f44370a1d825be474a24f6902d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-test.sh'

pass 5678544 2020-12-04 02:13:12 2020-12-04 02:16:59 2020-12-04 02:38:59 0:22:00 0:11:25 0:10:35 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_latest start} 2
dead 5678545 2020-12-04 02:13:13 2020-12-04 02:17:42 2020-12-04 07:21:48 5:04:06 smithi master centos 8.2 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/rados_api_tests} 2
fail 5678546 2020-12-04 02:13:13 2020-12-04 02:18:41 2020-12-04 02:38:40 0:19:59 0:13:46 0:06:13 smithi master centos 8.2 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi065 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=382dfe8aba4747f44370a1d825be474a24f6902d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 5678547 2020-12-04 02:13:14 2020-12-04 02:18:41 2020-12-04 02:44:41 0:26:00 0:18:48 0:07:12 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2 mon_election/classic} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5678548 2020-12-04 02:13:15 2020-12-04 02:18:41 2020-12-04 02:32:40 0:13:59 0:06:53 0:07:06 smithi master centos 8.2 rados/cephadm/workunits/{distro/centos_latest mon_election/classic task/test_adoption} 1
pass 5678549 2020-12-04 02:13:16 2020-12-04 02:18:41 2020-12-04 02:52:41 0:34:00 0:25:57 0:08:03 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 2
fail 5678550 2020-12-04 02:13:17 2020-12-04 02:18:41 2020-12-04 02:38:40 0:19:59 0:12:06 0:07:53 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi058 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=382dfe8aba4747f44370a1d825be474a24f6902d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

dead 5678551 2020-12-04 02:13:17 2020-12-04 02:18:41 2020-12-04 07:20:47 5:02:06 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/valgrind} 2
dead 5678552 2020-12-04 02:13:18 2020-12-04 02:18:43 2020-12-04 07:20:49 5:02:06 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps} 2
pass 5678553 2020-12-04 02:13:19 2020-12-04 02:20:48 2020-12-04 02:48:47 0:27:59 0:17:18 0:10:41 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_latest} fixed-2 mon_election/connectivity} 2
pass 5678554 2020-12-04 02:13:20 2020-12-04 02:20:48 2020-12-04 02:52:47 0:31:59 0:25:24 0:06:35 smithi master centos 8.2 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-snaps-balanced} 2
pass 5678555 2020-12-04 02:13:21 2020-12-04 02:20:48 2020-12-04 02:50:47 0:29:59 0:24:33 0:05:26 smithi master centos 8.2 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/rados_api_tests} 2
dead 5678556 2020-12-04 02:13:22 2020-12-04 02:20:48 2020-12-04 07:20:53 5:00:05 smithi master centos 8.2 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/radosbench-high-concurrency} 2
pass 5678557 2020-12-04 02:13:22 2020-12-04 02:20:48 2020-12-04 02:42:47 0:21:59 0:12:32 0:09:27 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/redirect_set_object} 2
dead 5678558 2020-12-04 02:13:23 2020-12-04 02:20:48 2020-12-04 07:24:53 5:04:05 4:51:56 0:12:09 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2}
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=9994)

fail 5678559 2020-12-04 02:13:24 2020-12-04 02:20:48 2020-12-04 02:40:47 0:19:59 0:11:17 0:08:42 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

pass 5678560 2020-12-04 02:13:25 2020-12-04 02:20:55 2020-12-04 02:42:54 0:21:59 0:11:25 0:10:34 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04 start} 2
dead 5678561 2020-12-04 02:13:26 2020-12-04 02:21:07 2020-12-04 07:21:13 5:00:06 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-snaps-balanced} 2
pass 5678562 2020-12-04 02:13:27 2020-12-04 02:22:46 2020-12-04 02:34:45 0:11:59 0:06:52 0:05:07 smithi master centos 8.2 rados/cephadm/workunits/{distro/centos_latest mon_election/connectivity task/test_adoption} 1
fail 5678563 2020-12-04 02:13:27 2020-12-04 02:22:46 2020-12-04 04:06:48 1:44:02 1:34:50 0:09:12 smithi master ubuntu 18.04 rados/standalone/{mon_election/classic supported-random-distro$/{ubuntu_latest} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-test.sh) on smithi106 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=382dfe8aba4747f44370a1d825be474a24f6902d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-test.sh'

pass 5678564 2020-12-04 02:13:28 2020-12-04 02:22:46 2020-12-04 02:44:46 0:22:00 0:11:37 0:10:23 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_latest start} 2
fail 5678565 2020-12-04 02:13:29 2020-12-04 02:22:46 2020-12-04 02:40:46 0:18:00 0:08:00 0:10:00 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi168 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=382dfe8aba4747f44370a1d825be474a24f6902d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5678566 2020-12-04 02:13:30 2020-12-04 02:22:46 2020-12-04 02:42:46 0:20:00 0:14:17 0:05:43 smithi master centos 8.2 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi060 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=382dfe8aba4747f44370a1d825be474a24f6902d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'