Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 5819911 2021-01-23 14:54:18 2021-01-23 14:54:41 2021-01-23 15:22:41 0:28:00 0:17:21 0:10:39 smithi master centos 8.2 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2 mon_election/classic} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5819912 2021-01-23 14:54:19 2021-01-23 14:54:46 2021-01-23 15:38:46 0:44:00 0:32:41 0:11:19 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command crashed: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents:TestClsRbd.mirror'"

fail 5819913 2021-01-23 14:54:20 2021-01-23 14:56:12 2021-01-23 15:18:12 0:22:00 0:11:55 0:10:05 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi172 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=58c3155eefac91c730f7b6fde0bfea039d6d8deb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

dead 5819914 2021-01-23 14:54:21 2021-01-23 14:56:47 2021-01-24 02:59:17 12:02:30 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04_podman} fixed-2 mon_election/connectivity} 2
fail 5819915 2021-01-23 14:54:21 2021-01-23 14:57:33 2021-01-23 15:21:33 0:24:00 0:15:29 0:08:31 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Expecting ':' delimiter: line 1 column 212992 (char 212991)

fail 5819916 2021-01-23 14:54:22 2021-01-23 14:58:06 2021-01-23 15:22:05 0:23:59 0:13:09 0:10:50 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 mon_election/connectivity start} 2
Failure Reason:

Unterminated string starting at: line 1 column 229368 (char 229367)

dead 5819917 2021-01-23 14:54:23 2021-01-23 14:58:06 2021-01-24 03:00:36 12:02:30 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/classic} 2
fail 5819918 2021-01-23 14:54:24 2021-01-23 14:58:29 2021-01-23 15:24:29 0:26:00 0:16:37 0:09:23 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)

dead 5819919 2021-01-23 14:54:24 2021-01-23 15:00:47 2021-01-24 03:03:12 12:02:25 smithi master centos 8.2 rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8}} 1
fail 5819920 2021-01-23 14:54:25 2021-01-23 15:02:04 2021-01-23 16:00:04 0:58:00 0:36:59 0:21:01 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --op write_excl 50 --pool unique_pool_0'

pass 5819921 2021-01-23 14:54:26 2021-01-23 15:02:29 2021-01-23 15:24:29 0:22:00 0:11:47 0:10:13 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
dead 5819922 2021-01-23 14:54:27 2021-01-23 15:02:37 2021-01-24 03:05:07 12:02:30 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
fail 5819923 2021-01-23 14:54:27 2021-01-23 15:02:41 2021-01-23 15:30:41 0:28:00 0:17:13 0:10:47 smithi master centos 8.2 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2 mon_election/connectivity} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5819924 2021-01-23 14:54:28 2021-01-23 15:04:54 2021-01-23 16:00:54 0:56:00 0:46:13 0:09:47 smithi master centos 8.2 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/radosbench-high-concurrency} 2
fail 5819925 2021-01-23 14:54:29 2021-01-23 15:04:54 2021-01-23 15:50:54 0:46:00 0:35:45 0:10:15 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command crashed: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo ceph osd pool create base 4'"

fail 5819926 2021-01-23 14:54:30 2021-01-23 15:06:47 2021-01-23 15:30:47 0:24:00 0:12:48 0:11:12 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 mon_election/classic start} 2
Failure Reason:

Unterminated string starting at: line 1 column 294905 (char 294904)

fail 5819927 2021-01-23 14:54:31 2021-01-23 15:06:47 2021-01-23 15:28:47 0:22:00 0:11:53 0:10:07 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi105 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=58c3155eefac91c730f7b6fde0bfea039d6d8deb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 5819928 2021-01-23 14:54:31 2021-01-23 15:06:47 2021-01-23 15:34:47 0:28:00 0:18:44 0:09:16 smithi master centos 8.2 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
fail 5819929 2021-01-23 14:54:32 2021-01-23 15:06:54 2021-01-23 15:28:54 0:22:00 0:16:24 0:05:36 smithi master rhel 8.3 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2 mon_election/classic} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5819930 2021-01-23 14:54:33 2021-01-23 15:07:30 2021-01-23 15:33:29 0:25:59 0:15:19 0:10:40 smithi master ubuntu 18.04 rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Expecting value: line 1 column 212992 (char 212991)

pass 5819931 2021-01-23 14:54:34 2021-01-23 15:08:34 2021-01-23 15:40:34 0:32:00 0:21:51 0:10:09 smithi master centos 8.2 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-small-objects-many-deletes} 2
fail 5819932 2021-01-23 14:54:34 2021-01-23 15:08:35 2021-01-23 15:36:34 0:27:59 0:17:26 0:10:33 smithi master centos 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{centos_8.0} fixed-2 mon_election/connectivity} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5819933 2021-01-23 14:54:35 2021-01-23 15:08:46 2021-01-23 15:34:45 0:25:59 0:16:49 0:09:10 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_orch_cli} 1
Failure Reason:

Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)

fail 5819934 2021-01-23 14:54:36 2021-01-23 15:08:58 2021-01-23 15:32:58 0:24:00 0:13:06 0:10:54 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 mon_election/connectivity start} 2
Failure Reason:

Unterminated string starting at: line 1 column 245758 (char 245757)

fail 5819935 2021-01-23 14:54:37 2021-01-23 15:11:03 2021-01-23 16:29:04 1:18:01 1:06:52 0:11:09 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi080 with status 1: "/bin/sh -c 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rados --no-log-to-stderr --name client.2 -b 65536 --object-size 65536 -p unique_pool_0 bench 90 write'"

fail 5819936 2021-01-23 14:54:37 2021-01-23 15:11:04 2021-01-23 15:37:03 0:25:59 0:19:03 0:06:56 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2 mon_election/classic} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 5819937 2021-01-23 14:54:38 2021-01-23 15:11:04 2021-01-24 03:07:19 11:56:15 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
dead 5819938 2021-01-23 14:54:39 2021-01-23 15:15:28 2021-01-24 03:07:43 11:52:15 smithi master rhel 8.3 rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 5819939 2021-01-23 14:54:40 2021-01-23 15:17:05 2021-01-23 16:39:06 1:22:01 1:00:33 0:21:28 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --op write_excl 50 --pool unique_pool_0'

fail 5819940 2021-01-23 14:54:40 2021-01-23 15:18:32 2021-01-23 15:40:31 0:21:59 0:13:41 0:08:18 smithi master centos 8.2 rados/cephadm/workunits/{distro/centos_latest mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi132 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=58c3155eefac91c730f7b6fde0bfea039d6d8deb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 5819941 2021-01-23 14:54:41 2021-01-23 15:18:48 2021-01-23 15:40:47 0:21:59 0:11:57 0:10:02 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} 2
dead 5819942 2021-01-23 14:54:42 2021-01-23 15:19:32 2021-01-24 03:07:46 11:48:14 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/connectivity} 2
fail 5819943 2021-01-23 14:54:43 2021-01-23 15:20:47 2021-01-23 15:44:46 0:23:59 0:13:10 0:10:49 smithi master ubuntu 18.04 rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 mon_election/classic start} 2
Failure Reason:

Unterminated string starting at: line 1 column 253940 (char 253939)

pass 5819944 2021-01-23 14:54:43 2021-01-23 15:20:47 2021-01-23 15:48:47 0:28:00 0:19:00 0:09:00 smithi master centos 8.2 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2