User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2020-12-31 04:10:08 | 2020-12-31 04:11:37 | 2020-12-31 10:56:19 | 6:44:42 | rados | wip-kefu-testing-2020-12-30-1123 | smithi | d21180a | 12 | 28 | 13 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 5748607 | 2020-12-31 04:10:23 | 2020-12-31 04:11:37 | 2020-12-31 04:41:36 | 0:29:59 | 0:18:52 | 0:11:07 | smithi | master | centos | 8.0 | rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_8.0} fixed-2 mon_election/classic} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 5748608 | 2020-12-31 04:10:24 | 2020-12-31 04:11:37 | 2020-12-31 09:49:43 | 5:38:06 | 5:32:11 | 0:05:55 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=31590)
fail | 5748609 | 2020-12-31 04:10:25 | 2020-12-31 04:12:10 | 2020-12-31 04:50:10 | 0:38:00 | 0:27:48 | 0:10:12 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: Command crashed: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo ceph osd pool create base 4'"
pass | 5748610 | 2020-12-31 04:10:26 | 2020-12-31 04:12:33 | 2020-12-31 04:44:33 | 0:32:00 | 0:23:57 | 0:08:03 | smithi | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/small-objects} | 2 | |
fail | 5748611 | 2020-12-31 04:10:27 | 2020-12-31 04:13:46 | 2020-12-31 05:39:47 | 1:26:01 | 1:10:17 | 0:15:44 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason: saw valgrind issues
fail | 5748612 | 2020-12-31 04:10:27 | 2020-12-31 04:13:46 | 2020-12-31 04:41:46 | 0:28:00 | 0:18:06 | 0:09:54 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2 mon_election/connectivity} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 5748613 | 2020-12-31 04:10:28 | 2020-12-31 04:14:27 | 2020-12-31 05:34:27 | 1:20:00 | 1:08:42 | 0:11:18 | smithi | master | centos | 8.2 | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
Failure Reason: Command failed on smithi172 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 5748614 | 2020-12-31 04:10:29 | 2020-12-31 04:14:32 | 2020-12-31 04:30:31 | 0:15:59 | 0:07:31 | 0:08:28 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_adoption} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_adoption.sh) on smithi205 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d21180a1e12f36e61aced96f8de2276edb1d0150 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
pass | 5748615 | 2020-12-31 04:10:30 | 2020-12-31 04:15:39 | 2020-12-31 05:11:39 | 0:56:00 | 0:38:16 | 0:17:44 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
fail | 5748616 | 2020-12-31 04:10:31 | 2020-12-31 04:18:25 | 2020-12-31 04:46:24 | 0:27:59 | 0:13:05 | 0:14:54 | smithi | master | centos | 8.2 | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 2 | |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'
pass | 5748617 | 2020-12-31 04:10:32 | 2020-12-31 04:18:25 | 2020-12-31 04:38:24 | 0:19:59 | 0:14:08 | 0:05:51 | smithi | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache} | 2 | |
pass | 5748618 | 2020-12-31 04:10:33 | 2020-12-31 04:19:03 | 2020-12-31 05:11:03 | 0:52:00 | 0:45:03 | 0:06:57 | smithi | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
fail | 5748619 | 2020-12-31 04:10:34 | 2020-12-31 04:19:20 | 2020-12-31 04:43:19 | 0:23:59 | 0:17:16 | 0:06:43 | smithi | master | rhel | 8.3 | rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{rhel_latest} fixed-2 mon_election/classic} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 5748620 | 2020-12-31 04:10:35 | 2020-12-31 04:20:38 | 2020-12-31 09:50:42 | 5:30:04 | | | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 |
fail | 5748621 | 2020-12-31 04:10:36 | 2020-12-31 04:20:38 | 2020-12-31 07:42:40 | 3:22:02 | 3:09:58 | 0:12:04 | smithi | master | rhel | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
Failure Reason: Command failed on smithi161 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
fail | 5748622 | 2020-12-31 04:10:37 | 2020-12-31 04:20:39 | 2020-12-31 05:48:37 | 1:27:58 | 1:12:56 | 0:15:02 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: saw valgrind issues
fail | 5748623 | 2020-12-31 04:10:38 | 2020-12-31 04:23:08 | 2020-12-31 05:19:07 | 0:55:59 | 0:34:37 | 0:21:22 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Command failed on smithi084 with status 134: "/bin/sh -c 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rados --no-log-to-stderr --name client.2 -b 65536 --object-size 65536 -p unique_pool_0 bench 90 write'"
dead | 5748624 | 2020-12-31 04:10:39 | 2020-12-31 04:24:54 | 2020-12-31 09:50:59 | 5:26:05 | | | smithi | master | centos | 8.2 | rados/objectstore/{backends/objectstore supported-random-distro$/{centos_8}} | 1 |
dead | 5748625 | 2020-12-31 04:10:39 | 2020-12-31 04:27:26 | 2020-12-31 09:49:31 | 5:22:05 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{ubuntu_18.04} fixed-2 mon_election/connectivity} | 2 |
pass | 5748626 | 2020-12-31 04:10:41 | 2020-12-31 04:27:33 | 2020-12-31 04:55:35 | 0:28:02 | 0:17:42 | 0:10:20 | smithi | master | centos | 8.2 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-agent-big} | 2 | |
pass | 5748627 | 2020-12-31 04:10:42 | 2020-12-31 04:27:34 | 2020-12-31 04:49:34 | 0:22:00 | 0:15:52 | 0:06:08 | smithi | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache-agent-small} | 2 | |
fail | 5748628 | 2020-12-31 04:10:43 | 2020-12-31 04:28:06 | 2020-12-31 05:52:09 | 1:24:03 | 1:15:37 | 0:08:26 | smithi | master | centos | 8.2 | rados/standalone/{mon_election/connectivity supported-random-distro$/{centos_8} workloads/scrub} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-scrub-snaps.sh) on smithi136 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d21180a1e12f36e61aced96f8de2276edb1d0150 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-snaps.sh'
fail | 5748629 | 2020-12-31 04:10:44 | 2020-12-31 04:28:12 | 2020-12-31 10:56:19 | 6:28:07 | 6:20:18 | 0:07:49 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/rbd_cls} | — | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=18723)
fail | 5748630 | 2020-12-31 04:10:45 | 2020-12-31 04:29:09 | 2020-12-31 05:27:09 | 0:58:00 | 0:38:05 | 0:19:55 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --op write_excl 50 --pool unique_pool_0'
dead | 5748631 | 2020-12-31 04:10:47 | 2020-12-31 04:29:09 | 2020-12-31 09:51:14 | 5:22:05 | | | smithi | master | centos | 8.2 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/pool-snaps-few-objects} | 2 |
fail | 5748632 | 2020-12-31 04:10:48 | 2020-12-31 04:29:09 | 2020-12-31 04:41:08 | 0:11:59 | 0:05:55 | 0:06:04 | smithi | master | ubuntu | 18.04 | rados/standalone/{mon_election/classic supported-random-distro$/{ubuntu_latest} workloads/crush} | 1 | |
Failure Reason: Command failed (workunit test crush/crush-choose-args.sh) on smithi097 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d21180a1e12f36e61aced96f8de2276edb1d0150 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/crush/crush-choose-args.sh'
fail | 5748633 | 2020-12-31 04:10:49 | 2020-12-31 04:29:57 | 2020-12-31 05:51:58 | 1:22:01 | 1:13:09 | 0:08:52 | smithi | master | centos | 8.2 | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
Failure Reason: Command failed on smithi019 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'
fail | 5748634 | 2020-12-31 04:10:50 | 2020-12-31 04:31:07 | 2020-12-31 08:25:12 | 3:54:05 | 3:46:32 | 0:07:33 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason: Command failed on smithi109 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
dead | 5748635 | 2020-12-31 04:10:51 | 2020-12-31 04:31:08 | 2020-12-31 09:51:15 | 5:20:07 | 5:08:19 | 0:11:48 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=29135)
pass | 5748636 | 2020-12-31 04:10:52 | 2020-12-31 04:31:08 | 2020-12-31 05:01:09 | 0:30:01 | 0:18:08 | 0:11:53 | smithi | master | centos | 8.2 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} | 2 | |
fail | 5748637 | 2020-12-31 04:10:54 | 2020-12-31 04:31:07 | 2020-12-31 04:59:07 | 0:28:00 | 0:19:42 | 0:08:18 | smithi | master | rhel | 8.0 | rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2 mon_election/classic} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 5748638 | 2020-12-31 04:10:55 | 2020-12-31 04:32:02 | 2020-12-31 05:08:02 | 0:36:00 | 0:26:22 | 0:09:38 | smithi | master | centos | 8.2 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
fail | 5748639 | 2020-12-31 04:10:56 | 2020-12-31 04:32:58 | 2020-12-31 04:46:57 | 0:13:59 | 0:07:22 | 0:06:37 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi006 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d21180a1e12f36e61aced96f8de2276edb1d0150 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 5748640 | 2020-12-31 04:10:56 | 2020-12-31 04:32:58 | 2020-12-31 05:02:57 | 0:29:59 | 0:20:49 | 0:09:10 | smithi | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
dead | 5748641 | 2020-12-31 04:10:57 | 2020-12-31 04:32:58 | 2020-12-31 09:51:03 | 5:18:05 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{ubuntu_18.04_podman} fixed-2 mon_election/connectivity} | 2 |
pass | 5748642 | 2020-12-31 04:10:58 | 2020-12-31 04:32:58 | 2020-12-31 05:54:58 | 1:22:00 | 1:15:04 | 0:06:56 | smithi | master | rhel | 8.3 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zlib supported-random-distro$/{rhel_8} tasks/dashboard} | 2 | |
dead | 5748643 | 2020-12-31 04:10:59 | 2020-12-31 04:33:45 | 2020-12-31 09:49:50 | 5:16:05 | | | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/rados_api_tests} | 2 |
fail | 5748644 | 2020-12-31 04:11:00 | 2020-12-31 04:35:16 | 2020-12-31 05:59:17 | 1:24:01 | 1:14:26 | 0:09:35 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason: saw valgrind issues
dead | 5748645 | 2020-12-31 04:11:01 | 2020-12-31 04:35:16 | 2020-12-31 09:49:22 | 5:14:06 | | | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/set-chunks-read} | 2 |
pass | 5748646 | 2020-12-31 04:11:02 | 2020-12-31 04:35:27 | 2020-12-31 04:49:27 | 0:14:00 | 0:06:37 | 0:07:23 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 5748647 | 2020-12-31 04:11:03 | 2020-12-31 04:35:31 | 2020-12-31 05:03:31 | 0:28:00 | 0:19:24 | 0:08:36 | smithi | master | rhel | 8.0 | rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{rhel_8.0} fixed-2 mon_election/classic} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 5748648 | 2020-12-31 04:11:04 | 2020-12-31 04:36:12 | 2020-12-31 05:10:12 | 0:34:00 | 0:26:12 | 0:07:48 | smithi | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 2 | |
dead | 5748649 | 2020-12-31 04:11:05 | 2020-12-31 04:37:07 | 2020-12-31 09:51:13 | 5:14:06 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/cache-snaps} | 3 |
dead | 5748650 | 2020-12-31 04:11:06 | 2020-12-31 04:37:45 | 2020-12-31 09:49:50 | 5:12:05 | | | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 |
fail | 5748651 | 2020-12-31 04:11:07 | 2020-12-31 04:37:49 | 2020-12-31 06:01:50 | 1:24:01 | 1:13:39 | 0:10:22 | smithi | master | centos | 8.2 | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason: saw valgrind issues
fail | 5748652 | 2020-12-31 04:11:08 | 2020-12-31 04:38:52 | 2020-12-31 06:00:53 | 1:22:01 | 1:13:35 | 0:08:26 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Command failed on smithi169 with status 1: "/bin/sh -c 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rados --no-log-to-stderr --name client.2 -b 65536 --object-size 65536 -p unique_pool_0 bench 90 write'"
fail | 5748653 | 2020-12-31 04:11:09 | 2020-12-31 04:39:03 | 2020-12-31 05:09:03 | 0:30:00 | 0:19:29 | 0:10:31 | smithi | master | centos | 8.0 | rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_8.0} fixed-2 mon_election/connectivity} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 5748654 | 2020-12-31 04:11:10 | 2020-12-31 04:39:16 | 2020-12-31 09:49:21 | 5:10:05 | | | smithi | master | centos | 8.2 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
fail | 5748655 | 2020-12-31 04:11:11 | 2020-12-31 04:40:20 | 2020-12-31 06:00:20 | 1:20:00 | 1:12:54 | 0:07:06 | smithi | master | ubuntu | 18.04 | rados/standalone/{mon_election/classic supported-random-distro$/{ubuntu_latest} workloads/scrub} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-scrub-snaps.sh) on smithi199 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d21180a1e12f36e61aced96f8de2276edb1d0150 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-snaps.sh'
dead | 5748656 | 2020-12-31 04:11:12 | 2020-12-31 04:40:20 | 2020-12-31 09:50:25 | 5:10:05 | | | smithi | master | centos | 8.2 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/radosbench-high-concurrency} | 2 |
fail | 5748657 | 2020-12-31 04:11:13 | 2020-12-31 04:40:20 | 2020-12-31 06:04:21 | 1:24:01 | 1:13:38 | 0:10:23 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: saw valgrind issues
fail | 5748658 | 2020-12-31 04:11:14 | 2020-12-31 04:40:20 | 2020-12-31 04:54:19 | 0:13:59 | 0:07:24 | 0:06:35 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi114 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d21180a1e12f36e61aced96f8de2276edb1d0150 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 5748659 | 2020-12-31 04:11:14 | 2020-12-31 04:40:39 | 2020-12-31 04:52:39 | 0:12:00 | 0:05:54 | 0:06:06 | smithi | master | ubuntu | 18.04 | rados/standalone/{mon_election/connectivity supported-random-distro$/{ubuntu_latest} workloads/crush} | 1 | |
Failure Reason: Command failed (workunit test crush/crush-choose-args.sh) on smithi041 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d21180a1e12f36e61aced96f8de2276edb1d0150 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/crush/crush-choose-args.sh'