Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 5659399 2020-11-27 02:28:35 2020-11-27 02:30:05 2020-11-27 02:52:04 0:21:59 0:15:13 0:06:46 smithi master rhel 8.3 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2 mon_election/classic} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

pass 5659400 2020-11-27 02:28:36 2020-11-27 02:30:05 2020-11-27 06:08:09 3:38:04 3:29:33 0:08:31 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
fail 5659401 2020-11-27 02:28:37 2020-11-27 02:30:05 2020-11-27 05:54:08 3:24:03 3:16:52 0:07:11 smithi master centos 8.2 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi176 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

fail 5659402 2020-11-27 02:28:37 2020-11-27 02:30:05 2020-11-27 02:42:04 0:11:59 0:02:11 0:09:48 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/connectivity} 2
Failure Reason:

Command failed on smithi062 with status 5: 'sudo systemctl stop ceph-ddb1c782-3059-11eb-980d-001a4aab830c@mon.a'

fail 5659403 2020-11-27 02:28:38 2020-11-27 02:30:05 2020-11-27 05:56:09 3:26:04 3:15:26 0:10:38 smithi master ubuntu 18.04 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi118 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

fail 5659404 2020-11-27 02:28:39 2020-11-27 02:30:11 2020-11-27 02:44:10 0:13:59 0:07:18 0:06:41 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

dead 5659405 2020-11-27 02:28:40 2020-11-27 02:32:36 2020-11-27 14:35:07 12:02:31 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados tasks/rados_api_tests validater/valgrind} 2
fail 5659406 2020-11-27 02:28:41 2020-11-27 02:34:20 2020-11-27 02:46:20 0:12:00 0:06:15 0:05:45 smithi master centos 8.2 rados/cephadm/workunits/{distro/centos_latest mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi189 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 5659407 2020-11-27 02:28:42 2020-11-27 02:34:21 2020-11-27 02:44:20 0:09:59 0:02:08 0:07:51 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/connectivity} 2
Failure Reason:

Command failed on smithi158 with status 5: 'sudo systemctl stop ceph-27e9f6e4-305a-11eb-980d-001a4aab830c@mon.a'

fail 5659408 2020-11-27 02:28:43 2020-11-27 02:34:21 2020-11-27 03:00:20 0:25:59 0:19:55 0:06:04 smithi master centos 8.2 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 5659409 2020-11-27 02:28:43 2020-11-27 02:34:20 2020-11-27 03:00:20 0:26:00 0:18:30 0:07:30 smithi master rhel 8.0 rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2 mon_election/classic} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 5659410 2020-11-27 02:28:44 2020-11-27 02:34:27 2020-11-27 02:46:27 0:12:00 0:06:19 0:05:41 smithi master centos 8.2 rados/cephadm/workunits/{distro/centos_latest mon_election/classic task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi005 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 5659411 2020-11-27 02:28:45 2020-11-27 02:36:18 2020-11-27 02:50:18 0:14:00 0:07:13 0:06:47 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

dead 5659412 2020-11-27 02:28:46 2020-11-27 02:36:20 2020-11-27 14:38:50 12:02:30 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/valgrind} 2
fail 5659413 2020-11-27 02:28:47 2020-11-27 02:36:19 2020-11-27 02:44:18 0:07:59 0:02:10 0:05:49 smithi master ubuntu 18.04 rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_latest} fixed-2 mon_election/connectivity} 2
Failure Reason:

Command failed on smithi051 with status 5: 'sudo systemctl stop ceph-5f711cf0-305a-11eb-980d-001a4aab830c@mon.a'

pass 5659414 2020-11-27 02:28:48 2020-11-27 02:36:19 2020-11-27 02:54:19 0:18:00 0:12:13 0:05:47 smithi master ubuntu 18.04 rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
fail 5659415 2020-11-27 02:28:49 2020-11-27 02:36:19 2020-11-27 03:10:18 0:33:59 0:28:17 0:05:42 smithi master centos 8.2 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

"2020-11-27T02:56:19.066251+0000 osd.7 (osd.7) 149 : cluster [ERR] scrub 61.17 61:eec4e359:test-rados-api-smithi201-33130-74::foo:24 : size 0 != clone_size 10" in cluster log

fail 5659416 2020-11-27 02:28:50 2020-11-27 02:36:19 2020-11-27 05:58:22 3:22:03 3:15:57 0:06:06 smithi master ubuntu 18.04 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi019 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

fail 5659417 2020-11-27 02:28:51 2020-11-27 02:36:20 2020-11-27 02:54:19 0:17:59 0:10:40 0:07:19 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

fail 5659418 2020-11-27 02:28:51 2020-11-27 02:36:19 2020-11-27 02:54:18 0:17:59 0:09:58 0:08:01 smithi master ubuntu 18.04 rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/many msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'

fail 5659419 2020-11-27 02:28:52 2020-11-27 02:37:10 2020-11-27 02:49:09 0:11:59 0:06:14 0:05:45 smithi master centos 8.2 rados/cephadm/workunits/{distro/centos_latest mon_election/connectivity task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi136 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

pass 5659420 2020-11-27 02:28:53 2020-11-27 02:38:47 2020-11-27 03:10:47 0:32:00 0:24:30 0:07:30 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} 2
fail 5659421 2020-11-27 02:28:54 2020-11-27 02:38:47 2020-11-27 02:52:46 0:13:59 0:07:08 0:06:51 smithi master ubuntu 18.04 rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi134 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 5659422 2020-11-27 02:28:55 2020-11-27 02:38:47 2020-11-27 07:50:53 5:12:06 5:04:25 0:07:41 smithi master ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_latest} 4
fail 5659423 2020-11-27 02:28:56 2020-11-27 02:38:49 2020-11-27 03:04:49 0:26:00 0:20:30 0:05:30 smithi master centos 8.2 rados/cephadm/dashboard/{distro/centos_latest task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi055 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2abe33517660ea12e639748e378aec66dcd6ac85 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'