User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2020-05-23 03:49:52 | 2020-05-23 03:50:35 | 2020-05-23 17:15:30 | 13:24:55 | rados | wip-kefu-testing-2020-05-23-0054 | smithi | b310224 | 222 | 88 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 5083873 | 2020-05-23 03:50:12 | 2020-05-23 03:50:35 | 2020-05-23 04:14:34 | 0:23:59 | 0:12:45 | 0:11:14 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:04:52.697959+00:00 smithi098 bash[10378]: debug 2020-05-23T04:04:52.695+0000 7f7a4d191700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi098 ... ' in syslog |
pass | 5083874 | 2020-05-23 03:50:13 | 2020-05-23 03:50:35 | 2020-05-23 04:10:34 | 0:19:59 | 0:12:18 | 0:07:41 | smithi | master | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/readwrite.yaml} | 2 | |
pass | 5083875 | 2020-05-23 03:50:14 | 2020-05-23 03:50:35 | 2020-05-23 04:22:35 | 0:32:00 | 0:24:24 | 0:07:36 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
fail | 5083876 | 2020-05-23 03:50:15 | 2020-05-23 03:50:35 | 2020-05-23 04:10:35 | 0:20:00 | 0:07:35 | 0:12:25 | smithi | master | ubuntu | 18.04 | rados/cephadm/orchestrator_cli/{2-node-mgr.yaml orchestrator_cli.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
Failure Reason:
Command failed on smithi124 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i c' |
pass | 5083877 | 2020-05-23 03:50:16 | 2020-05-23 03:50:39 | 2020-05-23 04:26:39 | 0:36:00 | 0:23:15 | 0:12:45 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
pass | 5083878 | 2020-05-23 03:50:17 | 2020-05-23 03:51:04 | 2020-05-23 04:09:04 | 0:18:00 | 0:08:01 | 0:09:59 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5083879 | 2020-05-23 03:50:18 | 2020-05-23 03:52:35 | 2020-05-23 04:16:35 | 0:24:00 | 0:12:50 | 0:11:10 | smithi | master | centos | 7.6 | rados/cephadm/smoke/{distro/centos_7.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:12:32.549429+00:00 smithi026 bash: debug 2020-05-23T04:12:32.547+0000 7f41816e7700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5083880 | 2020-05-23 03:50:19 | 2020-05-23 03:52:35 | 2020-05-23 04:30:35 | 0:38:00 | 0:11:13 | 0:26:47 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 5083881 | 2020-05-23 03:50:19 | 2020-05-23 03:52:35 | 2020-05-23 04:24:35 | 0:32:00 | 0:25:33 | 0:06:27 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | |
fail | 5083882 | 2020-05-23 03:50:20 | 2020-05-23 03:52:35 | 2020-05-23 04:08:34 | 0:15:59 | 0:08:20 | 0:07:39 | smithi | master | centos | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-hybrid.yaml supported-random-distro$/{centos_8.yaml} tasks/failover.yaml} | 2 | |
Failure Reason:
Command failed on smithi031 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a' |
pass | 5083883 | 2020-05-23 03:50:21 | 2020-05-23 03:52:35 | 2020-05-23 04:32:35 | 0:40:00 | 0:21:48 | 0:18:12 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 3 | |
pass | 5083884 | 2020-05-23 03:50:22 | 2020-05-23 03:54:28 | 2020-05-23 04:30:28 | 0:36:00 | 0:26:08 | 0:09:52 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
pass | 5083885 | 2020-05-23 03:50:23 | 2020-05-23 03:54:28 | 2020-05-23 04:12:28 | 0:18:00 | 0:08:52 | 0:09:08 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5083886 | 2020-05-23 03:50:24 | 2020-05-23 03:54:28 | 2020-05-23 04:18:28 | 0:24:00 | 0:12:35 | 0:11:25 | smithi | master | centos | 7.6 | rados/cephadm/smoke-roleless/{distro/centos_7.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:11:00.192583+00:00 smithi073 bash: debug 2020-05-23T04:11:00.190+0000 7fb1a12de700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi073 ... ' in syslog |
pass | 5083887 | 2020-05-23 03:50:24 | 2020-05-23 03:54:32 | 2020-05-23 04:16:32 | 0:22:00 | 0:10:52 | 0:11:08 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4M_rand_read.yaml} | 1 | |
pass | 5083888 | 2020-05-23 03:50:25 | 2020-05-23 03:54:33 | 2020-05-23 05:00:33 | 1:06:00 | 0:58:29 | 0:07:31 | smithi | master | centos | 8.1 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/one.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
pass | 5083889 | 2020-05-23 03:50:26 | 2020-05-23 03:56:14 | 2020-05-23 04:32:14 | 0:36:00 | 0:26:45 | 0:09:15 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5083890 | 2020-05-23 03:50:27 | 2020-05-23 03:56:16 | 2020-05-23 04:12:15 | 0:15:59 | 0:07:20 | 0:08:39 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/admin-socket.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5083891 | 2020-05-23 03:50:28 | 2020-05-23 03:56:22 | 2020-05-23 04:10:21 | 0:13:59 | 0:03:11 | 0:10:48 | smithi | master | ubuntu | 18.04 | rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml 3-wait.yaml distro$/{ubuntu_18.04.yaml} fixed-2.yaml} | 2 | |
Failure Reason:
Command failed on smithi148 with status 5: 'sudo systemctl stop ceph-345b7a62-9cab-11ea-a06a-001a4aab830c@mon.a' |
pass | 5083892 | 2020-05-23 03:50:29 | 2020-05-23 03:56:25 | 2020-05-23 04:52:25 | 0:56:00 | 0:12:06 | 0:43:54 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-hybrid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5083893 | 2020-05-23 03:50:29 | 2020-05-23 03:56:33 | 2020-05-23 05:26:34 | 1:30:01 | 1:22:13 | 0:07:48 | smithi | master | centos | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_8.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
fail | 5083894 | 2020-05-23 03:50:30 | 2020-05-23 03:56:35 | 2020-05-23 04:16:35 | 0:20:00 | 0:07:43 | 0:12:17 | smithi | master | ubuntu | 18.04 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-hybrid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Command failed on smithi141 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i b' |
pass | 5083895 | 2020-05-23 03:50:31 | 2020-05-23 03:56:35 | 2020-05-23 04:12:35 | 0:16:00 | 0:07:53 | 0:08:07 | smithi | master | centos | 8.1 | rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
fail | 5083896 | 2020-05-23 03:50:32 | 2020-05-23 03:58:43 | 2020-05-23 04:16:42 | 0:17:59 | 0:12:53 | 0:05:06 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_adoption.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi053 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
pass | 5083897 | 2020-05-23 03:50:33 | 2020-05-23 03:58:43 | 2020-05-23 04:32:43 | 0:34:00 | 0:28:26 | 0:05:34 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps-balanced.yaml} | 2 | |
pass | 5083898 | 2020-05-23 03:50:34 | 2020-05-23 03:58:43 | 2020-05-23 04:22:42 | 0:23:59 | 0:17:05 | 0:06:54 | smithi | master | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/repair_test.yaml} | 2 | |
pass | 5083899 | 2020-05-23 03:50:35 | 2020-05-23 03:58:43 | 2020-05-23 04:16:42 | 0:17:59 | 0:12:03 | 0:05:56 | smithi | master | rhel | 8.1 | rados/singleton/{all/deduptool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5083900 | 2020-05-23 03:50:35 | 2020-05-23 04:00:15 | 2020-05-23 05:00:16 | 1:00:01 | 0:52:04 | 0:07:57 | smithi | master | rhel | 8.1 | rados/standalone/{supported-random-distro$/{rhel_8.yaml} workloads/mon.yaml} | 1 | |
fail | 5083901 | 2020-05-23 03:50:36 | 2020-05-23 04:00:19 | 2020-05-23 04:30:19 | 0:30:00 | 0:21:10 | 0:08:50 | smithi | master | centos | 8.0 | rados/cephadm/with-work/{distro/centos_8.0.yaml fixed-2.yaml mode/root.yaml msgr/async-v2only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:23:07.001660+00:00 smithi001 bash[25469]: debug 2020-05-23T04:23:07.000+0000 7f45b7764700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5083902 | 2020-05-23 03:50:37 | 2020-05-23 04:00:32 | 2020-05-23 04:30:32 | 0:30:00 | 0:08:04 | 0:21:56 | smithi | master | centos | 8.1 | rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |
pass | 5083903 | 2020-05-23 03:50:38 | 2020-05-23 04:00:33 | 2020-05-23 04:24:33 | 0:24:00 | 0:13:33 | 0:10:27 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4M_rand_rw.yaml} | 1 | |
pass | 5083904 | 2020-05-23 03:50:39 | 2020-05-23 04:00:35 | 2020-05-23 04:38:35 | 0:38:00 | 0:31:07 | 0:06:53 | smithi | master | centos | 8.1 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
fail | 5083905 | 2020-05-23 03:50:40 | 2020-05-23 04:00:39 | 2020-05-23 04:18:39 | 0:18:00 | 0:09:07 | 0:08:53 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
Failure Reason:
Command failed on smithi195 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a' |
pass | 5083906 | 2020-05-23 03:50:40 | 2020-05-23 04:00:49 | 2020-05-23 04:24:49 | 0:24:00 | 0:12:36 | 0:11:24 | smithi | master | centos | 8.0 | rados/cephadm/smoke/{distro/centos_8.0.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5083907 | 2020-05-23 03:50:41 | 2020-05-23 04:02:39 | 2020-05-23 04:34:39 | 0:32:00 | 0:22:14 | 0:09:46 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
pass | 5083908 | 2020-05-23 03:50:42 | 2020-05-23 04:02:39 | 2020-05-23 04:24:39 | 0:22:00 | 0:08:53 | 0:13:07 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/divergent_priors.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5083909 | 2020-05-23 03:50:43 | 2020-05-23 04:02:39 | 2020-05-23 05:02:40 | 1:00:01 | 0:52:37 | 0:07:24 | smithi | master | centos | 8.0 | rados/cephadm/smoke-roleless/{distro/centos_8.0.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:18:20.379807+00:00 smithi192 bash[22015]: debug 2020-05-23T04:18:20.379+0000 7fb257b64700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi192 ... ' in syslog |
fail | 5083910 | 2020-05-23 03:50:44 | 2020-05-23 04:02:41 | 2020-05-23 04:18:40 | 0:15:59 | 0:08:16 | 0:07:43 | smithi | master | centos | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_8.yaml} tasks/insights.yaml} | 2 | |
Failure Reason:
Command failed on smithi067 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i b' |
pass | 5083911 | 2020-05-23 03:50:45 | 2020-05-23 04:04:49 | 2020-05-23 04:32:49 | 0:28:00 | 0:11:29 | 0:16:31 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 5083912 | 2020-05-23 03:50:46 | 2020-05-23 04:04:49 | 2020-05-23 04:22:48 | 0:17:59 | 0:11:55 | 0:06:04 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 | |
fail | 5083913 | 2020-05-23 03:50:47 | 2020-05-23 04:04:49 | 2020-05-23 04:40:49 | 0:36:00 | 0:28:57 | 0:07:03 | smithi | master | centos | 8.1 | rados/cephadm/with-work/{distro/centos_latest.yaml fixed-2.yaml mode/packaged.yaml msgr/async.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:26:16.285624+00:00 smithi153 bash[25100]: debug 2020-05-23T04:26:16.284+0000 7fe7dda04700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5083914 | 2020-05-23 03:50:47 | 2020-05-23 04:06:40 | 2020-05-23 04:22:39 | 0:15:59 | 0:09:57 | 0:06:02 | smithi | master | centos | 8.1 | rados/singleton/{all/divergent_priors2.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5083915 | 2020-05-23 03:50:48 | 2020-05-23 04:06:40 | 2020-05-23 05:00:41 | 0:54:01 | 0:37:09 | 0:16:52 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/octopus.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/few.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 3 | |
pass | 5083916 | 2020-05-23 03:50:49 | 2020-05-23 04:06:42 | 2020-05-23 04:28:42 | 0:22:00 | 0:11:23 | 0:10:37 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
pass | 5083917 | 2020-05-23 03:50:50 | 2020-05-23 04:08:49 | 2020-05-23 04:46:49 | 0:38:00 | 0:27:37 | 0:10:23 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync-many.yaml workloads/rados_mon_workunits.yaml} | 2 | |
pass | 5083918 | 2020-05-23 03:50:51 | 2020-05-23 04:08:49 | 2020-05-23 04:46:49 | 0:38:00 | 0:27:12 | 0:10:48 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-hybrid.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
fail | 5083919 | 2020-05-23 03:50:52 | 2020-05-23 04:08:49 | 2020-05-23 04:24:49 | 0:16:00 | 0:07:50 | 0:08:10 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
pass | 5083920 | 2020-05-23 03:50:53 | 2020-05-23 04:08:49 | 2020-05-23 04:28:49 | 0:20:00 | 0:11:11 | 0:08:49 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4M_rand_write.yaml} | 1 | |
pass | 5083921 | 2020-05-23 03:50:54 | 2020-05-23 04:09:05 | 2020-05-23 04:43:05 | 0:34:00 | 0:23:10 | 0:10:50 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
pass | 5083922 | 2020-05-23 03:50:55 | 2020-05-23 04:10:37 | 2020-05-23 07:42:42 | 3:32:05 | 0:14:56 | 3:17:09 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5083923 | 2020-05-23 03:50:55 | 2020-05-23 04:10:38 | 2020-05-23 04:36:37 | 0:25:59 | 0:17:17 | 0:08:42 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5083924 | 2020-05-23 03:50:56 | 2020-05-23 04:10:37 | 2020-05-23 04:28:37 | 0:18:00 | 0:12:26 | 0:05:34 | smithi | master | centos | 8.1 | rados/cephadm/smoke/{distro/centos_latest.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5083925 | 2020-05-23 03:50:57 | 2020-05-23 04:10:38 | 2020-05-23 04:32:37 | 0:21:59 | 0:11:22 | 0:10:37 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/scrub_test.yaml} | 2 | |
pass | 5083926 | 2020-05-23 03:50:58 | 2020-05-23 04:10:37 | 2020-05-23 04:42:37 | 0:32:00 | 0:21:29 | 0:10:31 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-balanced.yaml} | 2 | |
pass | 5083927 | 2020-05-23 03:50:59 | 2020-05-23 04:10:39 | 2020-05-23 04:28:38 | 0:17:59 | 0:09:01 | 0:08:58 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/dump-stuck.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5083928 | 2020-05-23 03:51:00 | 2020-05-23 04:10:40 | 2020-05-23 04:36:39 | 0:25:59 | 0:17:45 | 0:08:14 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} | 2 | |
fail | 5083929 | 2020-05-23 03:51:01 | 2020-05-23 04:12:31 | 2020-05-23 04:36:31 | 0:24:00 | 0:18:01 | 0:05:59 | smithi | master | centos | 8.1 | rados/cephadm/smoke-roleless/{distro/centos_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:26:56.595312+00:00 smithi191 bash[21766]: debug 2020-05-23T04:26:56.593+0000 7f8998269700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi191 ... ' in syslog |
fail | 5083930 | 2020-05-23 03:51:01 | 2020-05-23 04:12:31 | 2020-05-23 04:30:31 | 0:18:00 | 0:10:33 | 0:07:27 | smithi | master | rhel | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_8.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Command failed on smithi050 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i c' |
pass | 5083931 | 2020-05-23 03:51:02 | 2020-05-23 04:12:31 | 2020-05-23 04:30:31 | 0:18:00 | 0:12:04 | 0:05:56 | smithi | master | centos | 8.1 | rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5083932 | 2020-05-23 03:51:03 | 2020-05-23 04:12:33 | 2020-05-23 04:38:32 | 0:25:59 | 0:19:35 | 0:06:24 | smithi | master | centos | 8.1 | rados/valgrind-leaks/{1-start.yaml 2-inject-leak/osd.yaml centos_latest.yaml} | 1 | |
pass | 5083933 | 2020-05-23 03:51:04 | 2020-05-23 04:12:33 | 2020-05-23 04:26:32 | 0:13:59 | 0:07:27 | 0:06:32 | smithi | master | centos | 8.1 | rados/multimon/{clusters/3.yaml msgr-failures/few.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/mon_clock_with_skews.yaml} | 2 | |
fail | 5083934 | 2020-05-23 03:51:05 | 2020-05-23 04:12:33 | 2020-05-23 04:44:33 | 0:32:00 | 0:24:35 | 0:07:25 | smithi | master | rhel | 8.0 | rados/cephadm/with-work/{distro/rhel_8.0.yaml fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:38:10.938876+00:00 smithi109 bash[30753]: debug 2020-05-23T04:38:10.937+0000 7fc1eacae700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5083935 | 2020-05-23 03:51:06 | 2020-05-23 04:12:36 | 2020-05-23 04:50:36 | 0:38:00 | 0:30:02 | 0:07:58 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5083936 | 2020-05-23 03:51:07 | 2020-05-23 04:14:43 | 2020-05-23 04:34:43 | 0:20:00 | 0:12:02 | 0:07:58 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4K_rand_read.yaml} | 1 | |
pass | 5083937 | 2020-05-23 03:51:08 | 2020-05-23 04:14:43 | 2020-05-23 04:38:43 | 0:24:00 | 0:16:37 | 0:07:23 | smithi | master | rhel | 7.7 | rados/cephadm/smoke/{distro/rhel_7.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5083938 | 2020-05-23 03:51:08 | 2020-05-23 04:14:43 | 2020-05-23 04:48:43 | 0:34:00 | 0:22:12 | 0:11:48 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
pass | 5083939 | 2020-05-23 03:51:10 | 2020-05-23 04:14:43 | 2020-05-23 04:48:43 | 0:34:00 | 0:25:15 | 0:08:45 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 2 | |
fail | 5083940 | 2020-05-23 03:51:11 | 2020-05-23 04:14:43 | 2020-05-23 04:32:43 | 0:18:00 | 0:07:36 | 0:10:24 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason:
Command failed on smithi141 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i b' |
pass | 5083941 | 2020-05-23 03:51:12 | 2020-05-23 04:14:43 | 2020-05-23 04:30:43 | 0:16:00 | 0:07:00 | 0:09:00 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5083942 | 2020-05-23 03:51:13 | 2020-05-23 04:16:47 | 2020-05-23 04:48:47 | 0:32:00 | 0:24:21 | 0:07:39 | smithi | master | rhel | 7.7 | rados/cephadm/smoke-roleless/{distro/rhel_7.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:35:49.102740+00:00 smithi019 bash: debug 2020-05-23T04:35:49.101+0000 7f00c6e59700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi019 ... ' in syslog |
pass | 5083943 | 2020-05-23 03:51:14 | 2020-05-23 04:16:47 | 2020-05-23 04:52:47 | 0:36:00 | 0:25:12 | 0:10:48 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
fail | 5083944 | 2020-05-23 03:51:14 | 2020-05-23 04:16:47 | 2020-05-23 11:52:58 | 7:36:11 | 7:28:58 | 0:07:13 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi076 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
||||||||||||||
pass | 5083945 | 2020-05-23 03:51:15 | 2020-05-23 04:16:47 | 2020-05-23 06:50:50 | 2:34:03 | 2:26:47 | 0:07:16 | smithi | master | centos | 8.1 | rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/osd.yaml} | 1 | |
pass | 5083946 | 2020-05-23 03:51:16 | 2020-05-23 04:16:47 | 2020-05-23 04:28:47 | 0:12:00 | 0:04:39 | 0:07:21 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm_repos.yaml} | 1 | |
pass | 5083947 | 2020-05-23 03:51:17 | 2020-05-23 04:16:47 | 2020-05-23 04:48:47 | 0:32:00 | 0:25:39 | 0:06:21 | smithi | master | rhel | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml} | 2 | |
pass | 5083948 | 2020-05-23 03:51:18 | 2020-05-23 04:16:47 | 2020-05-23 04:54:47 | 0:38:00 | 0:27:56 | 0:10:04 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync.yaml workloads/snaps-few-objects.yaml} | 2 | |
pass | 5083949 | 2020-05-23 03:51:19 | 2020-05-23 04:18:39 | 2020-05-23 04:52:39 | 0:34:00 | 0:26:06 | 0:07:54 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
pass | 5083950 | 2020-05-23 03:51:20 | 2020-05-23 04:18:39 | 2020-05-23 04:52:39 | 0:34:00 | 0:24:53 | 0:09:07 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5083951 | 2020-05-23 03:51:21 | 2020-05-23 04:18:39 | 2020-05-23 05:46:40 | 1:28:01 | 0:12:03 | 1:15:58 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
fail | 5083952 | 2020-05-23 03:51:22 | 2020-05-23 04:18:39 | 2020-05-23 04:56:39 | 0:38:00 | 0:22:36 | 0:15:24 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 3 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
pass | 5083953 | 2020-05-23 03:51:23 | 2020-05-23 04:18:39 | 2020-05-23 04:58:39 | 0:40:00 | 0:31:01 | 0:08:59 | smithi | master | rhel | 8.1 | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
fail | 5083954 | 2020-05-23 03:51:24 | 2020-05-23 04:18:40 | 2020-05-23 05:00:39 | 0:41:59 | 0:31:24 | 0:10:35 | smithi | master | rhel | 8.1 | rados/cephadm/with-work/{distro/rhel_latest.yaml fixed-2.yaml mode/packaged.yaml msgr/async-v2only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:44:28.125144+00:00 smithi040 bash[25763]: debug 2020-05-23T04:44:28.124+0000 7f80798aa700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
||||||||||||||
pass | 5083955 | 2020-05-23 03:51:25 | 2020-05-23 04:18:40 | 2020-05-23 04:40:40 | 0:22:00 | 0:10:36 | 0:11:24 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4K_seq_read.yaml} | 1 | |
pass | 5083956 | 2020-05-23 03:51:25 | 2020-05-23 04:18:41 | 2020-05-23 04:58:41 | 0:40:00 | 0:29:29 | 0:10:31 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
pass | 5083957 | 2020-05-23 03:51:26 | 2020-05-23 04:22:46 | 2020-05-23 04:46:45 | 0:23:59 | 0:15:35 | 0:08:24 | smithi | master | rhel | 8.0 | rados/cephadm/smoke/{distro/rhel_8.0.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5083958 | 2020-05-23 03:51:27 | 2020-05-23 04:22:46 | 2020-05-23 04:50:46 | 0:28:00 | 0:20:42 | 0:07:18 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/osd_stale_reads.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5083959 | 2020-05-23 03:51:28 | 2020-05-23 04:22:46 | 2020-05-23 04:52:46 | 0:30:00 | 0:24:10 | 0:05:50 | smithi | master | centos | 8.1 | rados/singleton/{all/lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5083960 | 2020-05-23 03:51:29 | 2020-05-23 04:22:46 | 2020-05-23 05:14:46 | 0:52:00 | 0:41:48 | 0:10:12 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench-high-concurrency.yaml} | 2 | |
fail | 5083961 | 2020-05-23 03:51:30 | 2020-05-23 04:22:46 | 2020-05-23 05:06:47 | 0:44:01 | 0:34:18 | 0:09:43 | smithi | master | rhel | 8.0 | rados/cephadm/smoke-roleless/{distro/rhel_8.0.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 5083962 | 2020-05-23 03:51:31 | 2020-05-23 04:22:46 | 2020-05-23 04:46:46 | 0:24:00 | 0:13:40 | 0:10:20 | smithi | master | rhel | 8.1 | rados/multimon/{clusters/6.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/mon_recovery.yaml} | 2 | |
fail | 5083963 | 2020-05-23 03:51:32 | 2020-05-23 04:22:50 | 2020-05-23 04:42:49 | 0:19:59 | 0:07:24 | 0:12:35 | smithi | master | ubuntu | 18.04 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Command failed on smithi146 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i c' |
||||||||||||||
pass | 5083964 | 2020-05-23 03:51:33 | 2020-05-23 04:24:48 | 2020-05-23 04:56:48 | 0:32:00 | 0:26:54 | 0:05:06 | smithi | master | rhel | 8.1 | rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5083965 | 2020-05-23 03:51:33 | 2020-05-23 04:24:48 | 2020-05-23 05:06:49 | 0:42:01 | 0:27:16 | 0:14:45 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
fail | 5083966 | 2020-05-23 03:51:34 | 2020-05-23 04:24:48 | 2020-05-23 04:44:48 | 0:20:00 | 0:07:42 | 0:12:18 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_adoption.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi130 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
||||||||||||||
pass | 5083967 | 2020-05-23 03:51:35 | 2020-05-23 04:24:50 | 2020-05-23 04:44:49 | 0:19:59 | 0:10:11 | 0:09:48 | smithi | master | rhel | 8.1 | rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5083968 | 2020-05-23 03:51:36 | 2020-05-23 04:24:50 | 2020-05-23 04:48:50 | 0:24:00 | 0:11:36 | 0:12:24 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4M_rand_read.yaml} | 1 | |
pass | 5083969 | 2020-05-23 03:51:37 | 2020-05-23 04:26:48 | 2020-05-23 04:46:48 | 0:20:00 | 0:13:28 | 0:06:32 | smithi | master | rhel | 8.1 | rados/cephadm/smoke/{distro/rhel_latest.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5083970 | 2020-05-23 03:51:38 | 2020-05-23 04:26:48 | 2020-05-23 06:08:50 | 1:42:02 | 1:35:06 | 0:06:56 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
fail | 5083971 | 2020-05-23 03:51:39 | 2020-05-23 04:28:55 | 2020-05-23 04:48:54 | 0:19:59 | 0:07:26 | 0:12:33 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/progress.yaml} | 2 | |
Failure Reason:
Command failed on smithi165 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i b' |
||||||||||||||
pass | 5083972 | 2020-05-23 03:51:39 | 2020-05-23 04:28:55 | 2020-05-23 05:08:54 | 0:39:59 | 0:32:12 | 0:07:47 | smithi | master | rhel | 8.1 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5083973 | 2020-05-23 03:51:40 | 2020-05-23 04:28:55 | 2020-05-23 05:08:55 | 0:40:00 | 0:14:04 | 0:25:56 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 5083974 | 2020-05-23 03:51:41 | 2020-05-23 04:28:55 | 2020-05-23 04:54:54 | 0:25:59 | 0:15:46 | 0:10:13 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 | |
pass | 5083975 | 2020-05-23 03:51:42 | 2020-05-23 04:28:55 | 2020-05-23 05:00:54 | 0:31:59 | 0:17:35 | 0:14:24 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_cls_all.yaml} | 2 | |
fail | 5083976 | 2020-05-23 03:51:43 | 2020-05-23 04:28:55 | 2020-05-23 05:24:55 | 0:56:00 | 0:40:44 | 0:15:16 | smithi | master | rhel | 8.1 | rados/cephadm/smoke-roleless/{distro/rhel_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:53:00.989664+00:00 smithi006 bash[21995]: debug 2020-05-23T04:53:00.988+0000 7fbc370be700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi006 ... ' in syslog |
||||||||||||||
pass | 5083977 | 2020-05-23 03:51:44 | 2020-05-23 04:28:54 | 2020-05-23 04:42:54 | 0:14:00 | 0:07:54 | 0:06:06 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5083978 | 2020-05-23 03:51:45 | 2020-05-23 04:30:35 | 2020-05-23 04:48:34 | 0:17:59 | 0:12:29 | 0:05:30 | smithi | master | centos | 8.1 | rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5083979 | 2020-05-23 03:51:46 | 2020-05-23 04:30:35 | 2020-05-23 05:14:35 | 0:44:00 | 0:11:38 | 0:32:22 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
fail | 5083980 | 2020-05-23 03:51:46 | 2020-05-23 04:30:35 | 2020-05-23 05:04:35 | 0:34:00 | 0:22:03 | 0:11:57 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04.yaml fixed-2.yaml mode/root.yaml msgr/async.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:53:15.612176+00:00 smithi059 bash[13389]: debug 2020-05-23T04:53:15.608+0000 7faff41c2700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
||||||||||||||
pass | 5083981 | 2020-05-23 03:51:47 | 2020-05-23 04:30:35 | 2020-05-23 05:06:35 | 0:36:00 | 0:26:32 | 0:09:28 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5083982 | 2020-05-23 03:51:48 | 2020-05-23 04:30:35 | 2020-05-23 04:56:34 | 0:25:59 | 0:17:54 | 0:08:05 | smithi | master | rhel | 8.1 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/force-sync-many.yaml workloads/pool-create-delete.yaml} | 2 | |
pass | 5083983 | 2020-05-23 03:51:49 | 2020-05-23 04:30:36 | 2020-05-23 04:54:36 | 0:24:00 | 0:17:01 | 0:06:59 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
pass | 5083984 | 2020-05-23 03:51:50 | 2020-05-23 04:30:44 | 2020-05-23 05:18:44 | 0:48:00 | 0:34:20 | 0:13:40 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 3 | |
fail | 5083985 | 2020-05-23 03:51:51 | 2020-05-23 04:32:31 | 2020-05-23 04:58:30 | 0:25:59 | 0:12:31 | 0:13:28 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:53:59.900892+00:00 smithi138 bash[10397]: debug 2020-05-23T04:53:59.897+0000 7fdfeb41f700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
||||||||||||||
pass | 5083986 | 2020-05-23 03:51:52 | 2020-05-23 04:32:36 | 2020-05-23 04:52:36 | 0:20:00 | 0:11:58 | 0:08:02 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4M_seq_read.yaml} | 1 | |
pass | 5083987 | 2020-05-23 03:51:53 | 2020-05-23 04:32:38 | 2020-05-23 04:56:38 | 0:24:00 | 0:14:42 | 0:09:18 | smithi | master | rhel | 8.1 | rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5083988 | 2020-05-23 03:51:54 | 2020-05-23 04:32:44 | 2020-05-23 05:06:44 | 0:34:00 | 0:21:42 | 0:12:18 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} | 2 | |
pass | 5083989 | 2020-05-23 03:51:55 | 2020-05-23 04:32:44 | 2020-05-23 05:52:45 | 1:20:01 | 1:12:47 | 0:07:14 | smithi | master | centos | 8.1 | rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/scrub.yaml} | 1 | |
fail | 5083990 | 2020-05-23 03:51:55 | 2020-05-23 04:32:50 | 2020-05-23 04:58:50 | 0:26:00 | 0:12:50 | 0:13:10 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T04:49:39.924033+00:00 smithi062 bash[10515]: debug 2020-05-23T04:49:39.917+0000 7f303b7d4700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi062 ... ' in syslog |
||||||||||||||
pass | 5083991 | 2020-05-23 03:51:56 | 2020-05-23 04:34:54 | 2020-05-23 04:56:54 | 0:22:00 | 0:14:58 | 0:07:02 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
pass | 5083992 | 2020-05-23 03:51:57 | 2020-05-23 04:34:54 | 2020-05-23 04:58:54 | 0:24:00 | 0:08:08 | 0:15:52 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |
fail | 5083993 | 2020-05-23 03:51:58 | 2020-05-23 04:36:46 | 2020-05-23 04:54:46 | 0:18:00 | 0:11:25 | 0:06:35 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
||||||||||||||
pass | 5083994 | 2020-05-23 03:51:59 | 2020-05-23 04:36:47 | 2020-05-23 04:58:46 | 0:21:59 | 0:08:56 | 0:13:03 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/mon-auth-caps.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5083995 | 2020-05-23 03:52:00 | 2020-05-23 04:36:47 | 2020-05-23 05:14:47 | 0:38:00 | 0:28:32 | 0:09:28 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5083996 | 2020-05-23 03:52:01 | 2020-05-23 04:36:47 | 2020-05-23 04:52:46 | 0:15:59 | 0:07:35 | 0:08:24 | smithi | master | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Command failed on smithi007 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i b' |
||||||||||||||
fail | 5083997 | 2020-05-23 03:52:02 | 2020-05-23 04:38:47 | 2020-05-23 09:18:54 | 4:40:07 | 4:19:52 | 0:20:15 | smithi | master | rhel | 8.1 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\'' |
||||||||||||||
fail | 5083998 | 2020-05-23 03:52:03 | 2020-05-23 04:38:47 | 2020-05-23 05:22:48 | 0:44:01 | 0:31:12 | 0:12:49 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:02:21.674380+00:00 smithi196 bash[18131]: debug 2020-05-23T05:02:21.670+0000 7f276ead0700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
||||||||||||||
pass | 5083999 | 2020-05-23 03:52:03 | 2020-05-23 04:38:47 | 2020-05-23 05:06:47 | 0:28:00 | 0:14:20 | 0:13:40 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_python.yaml} | 2 | |
fail | 5084000 | 2020-05-23 03:52:04 | 2020-05-23 04:38:47 | 2020-05-23 04:58:47 | 0:20:00 | 0:10:05 | 0:09:55 | smithi | master | rhel | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_8.yaml} tasks/prometheus.yaml} | 2 | |
Failure Reason:
Command failed on smithi080 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i c' |
||||||||||||||
pass | 5084001 | 2020-05-23 03:52:05 | 2020-05-23 04:38:47 | 2020-05-23 05:02:47 | 0:24:00 | 0:11:30 | 0:12:30 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
pass | 5084002 | 2020-05-23 03:52:06 | 2020-05-23 04:40:51 | 2020-05-23 05:00:51 | 0:20:00 | 0:10:07 | 0:09:53 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4M_write.yaml} | 1 | |
fail | 5084003 | 2020-05-23 03:52:07 | 2020-05-23 04:40:52 | 2020-05-23 05:06:51 | 0:25:59 | 0:12:53 | 0:13:06 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:02:34.993001+00:00 smithi098 bash[14843]: debug 2020-05-23T05:02:34.984+0000 7f4fe43f9700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
||||||||||||||
pass | 5084004 | 2020-05-23 03:52:08 | 2020-05-23 04:40:51 | 2020-05-23 04:58:51 | 0:18:00 | 0:09:35 | 0:08:25 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/mon-config-key-caps.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5084005 | 2020-05-23 03:52:09 | 2020-05-23 04:40:51 | 2020-05-23 05:22:52 | 0:42:01 | 0:27:36 | 0:14:25 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 5084006 | 2020-05-23 03:52:09 | 2020-05-23 04:42:53 | 2020-05-23 05:18:53 | 0:36:00 | 0:27:06 | 0:08:54 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | |
pass | 5084007 | 2020-05-23 03:52:10 | 2020-05-23 04:42:53 | 2020-05-23 05:20:53 | 0:38:00 | 0:28:10 | 0:09:50 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
pass | 5084008 | 2020-05-23 03:52:11 | 2020-05-23 04:42:53 | 2020-05-23 06:04:55 | 1:22:02 | 0:12:38 | 1:09:24 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
fail | 5084009 | 2020-05-23 03:52:12 | 2020-05-23 04:42:55 | 2020-05-23 05:08:54 | 0:25:59 | 0:12:12 | 0:13:47 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:01:17.132579+00:00 smithi133 bash[13341]: debug 2020-05-23T05:01:17.127+0000 7f10d6a05700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi133 ... ' in syslog |
||||||||||||||
pass | 5084010 | 2020-05-23 03:52:13 | 2020-05-23 04:43:06 | 2020-05-23 05:19:06 | 0:36:00 | 0:25:47 | 0:10:13 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5084011 | 2020-05-23 03:52:14 | 2020-05-23 04:44:48 | 2020-05-23 05:02:48 | 0:18:00 | 0:09:12 | 0:08:48 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/many.yaml workloads/rados_5925.yaml} | 2 | |
pass | 5084012 | 2020-05-23 03:52:15 | 2020-05-23 04:44:49 | 2020-05-23 05:08:49 | 0:24:00 | 0:16:41 | 0:07:19 | smithi | master | rhel | 8.1 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5084013 | 2020-05-23 03:52:15 | 2020-05-23 04:44:51 | 2020-05-23 04:58:50 | 0:13:59 | 0:04:34 | 0:09:25 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm_repos.yaml} | 1 | |
pass | 5084014 | 2020-05-23 03:52:16 | 2020-05-23 04:47:02 | 2020-05-23 05:05:01 | 0:17:59 | 0:11:52 | 0:06:07 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/set-chunk-promote-flush.yaml} | 2 | |
fail | 5084015 | 2020-05-23 03:52:17 | 2020-05-23 04:47:02 | 2020-05-23 05:01:01 | 0:13:59 | 0:07:48 | 0:06:11 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/version-number-sanity.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
Failure Reason: Command failed on smithi165 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a'
pass | 5084016 | 2020-05-23 03:52:18 | 2020-05-23 04:47:02 | 2020-05-23 07:05:05 | 2:18:03 | 1:54:56 | 0:23:07 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/few.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 3 | |
pass | 5084017 | 2020-05-23 03:52:19 | 2020-05-23 04:47:02 | 2020-05-23 05:19:02 | 0:32:00 | 0:21:33 | 0:10:27 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |
fail | 5084018 | 2020-05-23 03:52:20 | 2020-05-23 04:47:02 | 2020-05-23 05:23:01 | 0:35:59 | 0:28:51 | 0:07:08 | smithi | master | centos | 8.0 | rados/cephadm/with-work/{distro/centos_8.0.yaml fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:07:39.725523+00:00 smithi073 bash[25426]: debug 2020-05-23T05:07:39.724+0000 7f438de4e700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5084019 | 2020-05-23 03:52:21 | 2020-05-23 04:48:49 | 2020-05-23 05:20:49 | 0:32:00 | 0:22:40 | 0:09:20 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_omap_write.yaml} | 1 | |
pass | 5084020 | 2020-05-23 03:52:22 | 2020-05-23 04:48:49 | 2020-05-23 05:06:48 | 0:17:59 | 0:11:28 | 0:06:31 | smithi | master | rhel | 8.1 | rados/singleton/{all/mon-config.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
fail | 5084021 | 2020-05-23 03:52:23 | 2020-05-23 04:48:49 | 2020-05-23 05:14:49 | 0:26:00 | 0:12:49 | 0:13:11 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_latest.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:11:11.934524+00:00 smithi033 bash[10514]: debug 2020-05-23T05:11:11.929+0000 7f927e6a4700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5084022 | 2020-05-23 03:52:24 | 2020-05-23 04:48:49 | 2020-05-23 05:06:49 | 0:18:00 | 0:07:55 | 0:10:05 | smithi | master | centos | 8.1 | rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
dead | 5084023 | 2020-05-23 03:52:24 | 2020-05-23 04:48:49 | 2020-05-23 16:51:22 | 12:02:33 | | | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
pass | 5084024 | 2020-05-23 03:52:25 | 2020-05-23 04:48:51 | 2020-05-23 05:16:51 | 0:28:00 | 0:15:57 | 0:12:03 | smithi | master | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_stress_watch.yaml} | 2 | |
fail | 5084025 | 2020-05-23 03:52:26 | 2020-05-23 04:48:55 | 2020-05-23 05:12:54 | 0:23:59 | 0:13:15 | 0:10:44 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_latest.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:04:17.705113+00:00 smithi042 bash[10370]: debug 2020-05-23T05:04:17.699+0000 7f9dbaf19700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi042 ... ' in syslog
pass | 5084026 | 2020-05-23 03:52:27 | 2020-05-23 04:50:25 | 2020-05-23 05:12:24 | 0:21:59 | 0:16:23 | 0:05:36 | smithi | master | centos | 8.1 | rados/singleton/{all/osd-backfill.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
fail | 5084027 | 2020-05-23 03:52:28 | 2020-05-23 04:50:37 | 2020-05-23 05:10:37 | 0:20:00 | 0:10:24 | 0:09:36 | smithi | master | rhel | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-lz4.yaml supported-random-distro$/{rhel_8.yaml} tasks/workunits.yaml} | 2 | |
Failure Reason: Command failed on smithi002 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i c'
fail | 5084028 | 2020-05-23 03:52:29 | 2020-05-23 04:50:47 | 2020-05-23 05:18:47 | 0:28:00 | 0:19:53 | 0:08:07 | smithi | master | centos | 8.1 | rados/cephadm/with-work/{distro/centos_latest.yaml fixed-2.yaml mode/root.yaml msgr/async-v2only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:12:16.864692+00:00 smithi162 bash[25110]: debug 2020-05-23T05:12:16.863+0000 7f6bee4a2700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
fail | 5084029 | 2020-05-23 03:52:30 | 2020-05-23 04:52:41 | 2020-05-23 05:12:41 | 0:20:00 | 0:10:21 | 0:09:39 | smithi | master | rhel | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_8.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason: Command failed on smithi188 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i b'
pass | 5084030 | 2020-05-23 03:52:30 | 2020-05-23 04:52:42 | 2020-05-23 05:12:41 | 0:19:59 | 0:11:12 | 0:08:47 | smithi | master | rhel | 8.1 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
fail | 5084031 | 2020-05-23 03:52:31 | 2020-05-23 04:52:41 | 2020-05-23 05:10:41 | 0:18:00 | 0:08:36 | 0:09:24 | smithi | master | ubuntu | 18.04 | rados/rest/{mgr-restful.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi110 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a'
fail | 5084032 | 2020-05-23 03:52:32 | 2020-05-23 04:52:42 | 2020-05-23 05:12:41 | 0:19:59 | 0:07:56 | 0:12:03 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Command failed on smithi156 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a'
pass | 5084033 | 2020-05-23 03:52:33 | 2020-05-23 04:52:47 | 2020-05-23 05:10:47 | 0:18:00 | 0:10:53 | 0:07:07 | smithi | master | centos | 8.1 | rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/crush.yaml} | 1 | |
fail | 5084034 | 2020-05-23 03:52:34 | 2020-05-23 04:52:47 | 2020-05-23 08:58:53 | 4:06:06 | 3:46:48 | 0:19:18 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml thrashosds-health.yaml ubuntu_latest.yaml} | 4 | |
Failure Reason: failed to recover before timeout expired
pass | 5084035 | 2020-05-23 03:52:35 | 2020-05-23 04:52:48 | 2020-05-23 05:18:48 | 0:26:00 | 0:18:59 | 0:07:01 | smithi | master | centos | 8.1 | rados/valgrind-leaks/{1-start.yaml 2-inject-leak/mon.yaml centos_latest.yaml} | 1 | |
pass | 5084036 | 2020-05-23 03:52:36 | 2020-05-23 04:54:53 | 2020-05-23 05:24:53 | 0:30:00 | 0:23:37 | 0:06:23 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/small-objects-balanced.yaml} | 2 | |
pass | 5084037 | 2020-05-23 03:52:36 | 2020-05-23 04:54:53 | 2020-05-23 05:18:53 | 0:24:00 | 0:11:06 | 0:12:54 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 5084038 | 2020-05-23 03:52:37 | 2020-05-23 04:54:53 | 2020-05-23 05:24:53 | 0:30:00 | 0:20:23 | 0:09:37 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 | |
pass | 5084039 | 2020-05-23 03:52:38 | 2020-05-23 04:54:55 | 2020-05-23 06:22:57 | 1:28:02 | 0:11:48 | 1:16:14 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5084040 | 2020-05-23 03:52:39 | 2020-05-23 04:56:50 | 2020-05-23 05:18:49 | 0:21:59 | 0:12:30 | 0:09:29 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/sample_fio.yaml} | 1 | |
fail | 5084041 | 2020-05-23 03:52:40 | 2020-05-23 04:56:50 | 2020-05-23 05:10:49 | 0:13:59 | 0:07:39 | 0:06:20 | smithi | master | centos | 8.1 | rados/cephadm/orchestrator_cli/{2-node-mgr.yaml orchestrator_cli.yaml supported-random-distro$/{centos_8.yaml}} | 2 | |
Failure Reason: Command failed on smithi089 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i c'
pass | 5084042 | 2020-05-23 03:52:41 | 2020-05-23 04:56:50 | 2020-05-23 05:24:50 | 0:28:00 | 0:21:18 | 0:06:42 | smithi | master | rhel | 8.1 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
fail | 5084043 | 2020-05-23 03:52:41 | 2020-05-23 04:56:50 | 2020-05-23 05:20:50 | 0:24:00 | 0:12:47 | 0:11:13 | smithi | master | centos | 7.6 | rados/cephadm/smoke/{distro/centos_7.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:16:32.608833+00:00 smithi099 bash: debug 2020-05-23T05:16:32.607+0000 7fb972063700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5084044 | 2020-05-23 03:52:42 | 2020-05-23 04:56:55 | 2020-05-23 05:32:55 | 0:36:00 | 0:30:33 | 0:05:27 | smithi | master | centos | 8.1 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5084045 | 2020-05-23 03:52:43 | 2020-05-23 04:58:46 | 2020-05-23 05:40:46 | 0:42:00 | 0:28:14 | 0:13:46 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5084046 | 2020-05-23 03:52:44 | 2020-05-23 04:58:46 | 2020-05-23 05:40:46 | 0:42:00 | 0:27:31 | 0:14:29 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/one.yaml workloads/rados_api_tests.yaml} | 2 | |
pass | 5084047 | 2020-05-23 03:52:45 | 2020-05-23 04:58:46 | 2020-05-23 05:38:46 | 0:40:00 | 0:21:11 | 0:18:49 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects-localized.yaml} | 2 | |
fail | 5084048 | 2020-05-23 03:52:46 | 2020-05-23 04:58:47 | 2020-05-23 05:22:47 | 0:24:00 | 0:12:57 | 0:11:03 | smithi | master | centos | 7.6 | rados/cephadm/smoke-roleless/{distro/centos_7.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:15:34.088711+00:00 smithi203 bash: debug 2020-05-23T05:15:34.080+0000 7f78ca60c700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi203 ... ' in syslog
fail | 5084049 | 2020-05-23 03:52:47 | 2020-05-23 04:58:48 | 2020-05-23 05:38:48 | 0:40:00 | 0:22:43 | 0:17:17 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 3 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 5084050 | 2020-05-23 03:52:47 | 2020-05-23 04:58:51 | 2020-05-23 05:16:50 | 0:17:59 | 0:12:20 | 0:05:39 | smithi | master | centos | 8.1 | rados/singleton/{all/osd-recovery.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5084051 | 2020-05-23 03:52:48 | 2020-05-23 04:58:51 | 2020-05-23 05:32:51 | 0:34:00 | 0:28:19 | 0:05:41 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-hybrid.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} | 2 | |
pass | 5084052 | 2020-05-23 03:52:49 | 2020-05-23 04:58:51 | 2020-05-23 05:40:52 | 0:42:01 | 0:29:44 | 0:12:17 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
fail | 5084053 | 2020-05-23 03:52:50 | 2020-05-23 04:58:52 | 2020-05-23 05:18:52 | 0:20:00 | 0:03:07 | 0:16:53 | smithi | master | ubuntu | 18.04 | rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml 3-wait.yaml distro$/{ubuntu_18.04.yaml} fixed-2.yaml} | 2 | |
Failure Reason: Command failed on smithi191 with status 5: 'sudo systemctl stop ceph-c2fdb4ac-9cb4-11ea-a06a-001a4aab830c@mon.a'
pass | 5084054 | 2020-05-23 03:52:51 | 2020-05-23 04:58:55 | 2020-05-23 05:12:55 | 0:14:00 | 0:09:06 | 0:04:54 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/balancer.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5084055 | 2020-05-23 03:52:51 | 2020-05-23 05:00:33 | 2020-05-23 05:22:32 | 0:21:59 | 0:10:50 | 0:11:09 | smithi | master | rhel | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_striper.yaml} | 2 | |
pass | 5084056 | 2020-05-23 03:52:52 | 2020-05-23 05:00:35 | 2020-05-23 05:20:34 | 0:19:59 | 0:10:20 | 0:09:39 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/sample_radosbench.yaml} | 1 | |
pass | 5084057 | 2020-05-23 03:52:53 | 2020-05-23 05:00:41 | 2020-05-23 05:22:40 | 0:21:59 | 0:12:32 | 0:09:27 | smithi | master | rhel | 8.1 | rados/multimon/{clusters/3.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/mon_recovery.yaml} | 2 | |
fail | 5084058 | 2020-05-23 03:52:54 | 2020-05-23 05:00:42 | 2020-05-23 05:18:41 | 0:17:59 | 0:11:21 | 0:06:38 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_adoption.yaml} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_adoption.sh) on smithi165 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
pass | 5084059 | 2020-05-23 03:52:55 | 2020-05-23 05:00:52 | 2020-05-23 05:34:52 | 0:34:00 | 0:22:33 | 0:11:27 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
pass | 5084060 | 2020-05-23 03:52:56 | 2020-05-23 05:00:55 | 2020-05-23 05:14:55 | 0:14:00 | 0:07:55 | 0:06:05 | smithi | master | centos | 8.1 | rados/singleton/{all/peer.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5084061 | 2020-05-23 03:52:56 | 2020-05-23 05:01:02 | 2020-05-23 05:25:02 | 0:24:00 | 0:12:31 | 0:11:29 | smithi | master | centos | 8.0 | rados/cephadm/smoke/{distro/centos_8.0.yaml fixed-2.yaml start.yaml} | 2 | |
fail | 5084062 | 2020-05-23 03:52:57 | 2020-05-23 05:02:56 | 2020-05-23 05:20:56 | 0:18:00 | 0:10:07 | 0:07:53 | smithi | master | rhel | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-snappy.yaml supported-random-distro$/{rhel_8.yaml} tasks/crash.yaml} | 2 | |
Failure Reason: Command failed on smithi098 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a'
fail | 5084063 | 2020-05-23 03:52:58 | 2020-05-23 05:02:57 | 2020-05-23 05:48:57 | 0:46:00 | 0:37:39 | 0:08:21 | smithi | master | centos | 8.0 | rados/cephadm/smoke-roleless/{distro/centos_8.0.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:20:09.055112+00:00 smithi114 bash[22024]: debug 2020-05-23T05:20:09.054+0000 7f3003849700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi114 ... ' in syslog
pass | 5084064 | 2020-05-23 03:52:59 | 2020-05-23 05:02:57 | 2020-05-23 05:56:57 | 0:54:00 | 0:11:55 | 0:42:05 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
fail | 5084065 | 2020-05-23 03:53:00 | 2020-05-23 05:12:44 | 2020-05-23 05:30:43 | 0:17:59 | 0:11:06 | 0:06:53 | smithi | master | centos | 8.1 | rados/singleton/{all/pg-autoscaler.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 2 | |
Failure Reason: "2020-05-23T05:28:58.403599+0000 mgr.x (mgr.4103) 100 : cluster [ERR] Unhandled exception from module 'pg_autoscaler' while running on mgr.x: division by zero" in cluster log
pass | 5084066 | 2020-05-23 03:53:01 | 2020-05-23 05:12:56 | 2020-05-23 06:00:56 | 0:48:00 | 0:28:27 | 0:19:33 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-hybrid.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
dead | 5084067 | 2020-05-23 03:53:01 | 2020-05-23 05:12:56 | 2020-05-23 17:15:30 | 12:02:34 | 11:54:51 | 0:07:43 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | — | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=14368)
pass | 5084068 | 2020-05-23 03:53:02 | 2020-05-23 05:14:51 | 2020-05-23 05:50:51 | 0:36:00 | 0:24:17 | 0:11:43 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects-balanced.yaml} | 2 | |
fail | 5084069 | 2020-05-23 03:53:03 | 2020-05-23 05:14:51 | 2020-05-23 05:32:50 | 0:17:59 | 0:07:27 | 0:10:32 | smithi | master | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-lz4.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason: Command failed on smithi158 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a'
pass | 5084070 | 2020-05-23 03:53:04 | 2020-05-23 05:14:51 | 2020-05-23 05:34:50 | 0:19:59 | 0:12:19 | 0:07:40 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5084071 | 2020-05-23 03:53:05 | 2020-05-23 05:14:51 | 2020-05-23 05:30:50 | 0:15:59 | 0:07:49 | 0:08:10 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
Failure Reason: Command failed on smithi079 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a'
fail | 5084072 | 2020-05-23 03:53:06 | 2020-05-23 05:14:56 | 2020-05-23 05:54:56 | 0:40:00 | 0:33:14 | 0:06:46 | smithi | master | rhel | 8.0 | rados/cephadm/with-work/{distro/rhel_8.0.yaml fixed-2.yaml mode/packaged.yaml msgr/async.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:39:43.678173+00:00 smithi072 bash[30779]: debug 2020-05-23T05:39:43.677+0000 7f8b97092700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5084073 | 2020-05-23 03:53:06 | 2020-05-23 05:17:06 | 2020-05-23 05:43:06 | 0:26:00 | 0:16:38 | 0:09:22 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/cosbench_64K_read_write.yaml} | 1 | |
pass | 5084074 | 2020-05-23 03:53:07 | 2020-05-23 05:17:06 | 2020-05-23 05:53:06 | 0:36:00 | 0:27:56 | 0:08:04 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5084075 | 2020-05-23 03:53:08 | 2020-05-23 05:18:57 | 2020-05-23 06:12:58 | 0:54:01 | 0:42:40 | 0:11:21 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
fail | 5084076 | 2020-05-23 03:53:09 | 2020-05-23 05:18:57 | 2020-05-23 05:34:57 | 0:16:00 | 0:07:50 | 0:08:10 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi138 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 5084077 | 2020-05-23 03:53:10 | 2020-05-23 05:18:57 | 2020-05-23 05:34:57 | 0:16:00 | 0:10:31 | 0:05:29 | smithi | master | rhel | 8.1 | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5084078 | 2020-05-23 03:53:11 | 2020-05-23 05:18:58 | 2020-05-23 06:00:58 | 0:42:00 | 0:36:03 | 0:05:57 | smithi | master | rhel | 8.1 | rados/standalone/{supported-random-distro$/{rhel_8.yaml} workloads/erasure-code.yaml} | 1 | |
pass | 5084079 | 2020-05-23 03:53:11 | 2020-05-23 05:18:58 | 2020-05-23 06:00:58 | 0:42:00 | 0:31:25 | 0:10:35 | smithi | master | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_workunit_loadgen_big.yaml} | 2 | |
pass | 5084080 | 2020-05-23 03:53:12 | 2020-05-23 05:18:58 | 2020-05-23 05:58:58 | 0:40:00 | 0:31:31 | 0:08:29 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects-localized.yaml} | 2 | |
pass | 5084081 | 2020-05-23 03:53:13 | 2020-05-23 05:18:58 | 2020-05-23 05:42:57 | 0:23:59 | 0:12:20 | 0:11:39 | smithi | master | centos | 8.1 | rados/cephadm/smoke/{distro/centos_latest.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5084082 | 2020-05-23 03:53:14 | 2020-05-23 05:18:58 | 2020-05-23 05:52:58 | 0:34:00 | 0:24:02 | 0:09:58 | smithi | master | centos | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |
fail | 5084083 | 2020-05-23 03:53:15 | 2020-05-23 05:19:03 | 2020-05-23 06:15:04 | 0:56:01 | 0:25:37 | 0:30:24 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 3 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 5084084 | 2020-05-23 03:53:16 | 2020-05-23 05:19:07 | 2020-05-23 05:43:07 | 0:24:00 | 0:17:47 | 0:06:13 | smithi | master | rhel | 8.1 | rados/singleton/{all/radostool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
fail | 5084085 | 2020-05-23 03:53:17 | 2020-05-23 05:20:50 | 2020-05-23 05:58:50 | 0:38:00 | 0:31:07 | 0:06:53 | smithi | master | centos | 8.1 | rados/cephadm/smoke-roleless/{distro/centos_latest.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
pass | 5084086 | 2020-05-23 03:53:17 | 2020-05-23 05:20:50 | 2020-05-23 05:50:50 | 0:30:00 | 0:10:10 | 0:19:50 | smithi | master | rhel | 8.1 | rados/multimon/{clusters/6.yaml msgr-failures/many.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |
pass | 5084087 | 2020-05-23 03:53:18 | 2020-05-23 05:20:51 | 2020-05-23 05:46:51 | 0:26:00 | 0:16:35 | 0:09:25 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/cosbench_64K_write.yaml} | 1 | |
pass | 5084088 | 2020-05-23 03:53:19 | 2020-05-23 05:20:55 | 2020-05-23 05:36:54 | 0:15:59 | 0:08:42 | 0:07:17 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
fail | 5084089 | 2020-05-23 03:53:20 | 2020-05-23 05:20:57 | 2020-05-23 05:50:57 | 0:30:00 | 0:22:30 | 0:07:30 | smithi | master | rhel | 8.1 | rados/cephadm/with-work/{distro/rhel_latest.yaml fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T05:43:20.798733+00:00 smithi109 bash[25730]: debug 2020-05-23T05:43:20.797+0000 7f6fd303d700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5084090 | 2020-05-23 03:53:21 | 2020-05-23 05:22:49 | 2020-05-23 05:58:49 | 0:36:00 | 0:26:59 | 0:09:01 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
fail | 5084091 | 2020-05-23 03:53:22 | 2020-05-23 05:22:49 | 2020-05-23 05:46:49 | 0:24:00 | 0:07:40 | 0:16:20 | smithi | master | centos | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zlib.yaml supported-random-distro$/{centos_8.yaml} tasks/failover.yaml} | 2 | |
Failure Reason:
Command failed on smithi041 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a' |
pass | 5084092 | 2020-05-23 03:53:23 | 2020-05-23 05:22:50 | 2020-05-23 06:06:50 | 0:44:00 | 0:28:53 | 0:15:07 | smithi | master | rhel | 8.1 | rados/singleton/{all/random-eio.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 2 | |
pass | 5084093 | 2020-05-23 03:53:24 | 2020-05-23 05:22:50 | 2020-05-23 05:48:49 | 0:25:59 | 0:16:25 | 0:09:34 | smithi | master | rhel | 7.7 | rados/cephadm/smoke/{distro/rhel_7.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5084094 | 2020-05-23 03:53:24 | 2020-05-23 05:22:53 | 2020-05-23 05:54:53 | 0:32:00 | 0:24:19 | 0:07:41 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
pass | 5084095 | 2020-05-23 03:53:25 | 2020-05-23 05:23:03 | 2020-05-23 07:13:05 | 1:50:02 | 0:11:23 | 1:38:39 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5084096 | 2020-05-23 03:53:26 | 2020-05-23 05:24:31 | 2020-05-23 05:54:30 | 0:29:59 | 0:10:43 | 0:19:16 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 5084097 | 2020-05-23 03:53:27 | 2020-05-23 05:24:51 | 2020-05-23 05:46:50 | 0:21:59 | 0:10:38 | 0:11:21 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 | |
fail | 5084098 | 2020-05-23 03:53:28 | 2020-05-23 05:24:54 | 2020-05-23 06:10:54 | 0:46:00 | 0:30:36 | 0:15:24 | smithi | master | rhel | 7.7 | rados/cephadm/smoke-roleless/{distro/rhel_7.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
pass | 5084099 | 2020-05-23 03:53:28 | 2020-05-23 05:24:54 | 2020-05-23 06:14:54 | 0:50:00 | 0:18:08 | 0:31:52 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
pass | 5084100 | 2020-05-23 03:53:29 | 2020-05-23 05:24:57 | 2020-05-23 05:48:56 | 0:23:59 | 0:13:33 | 0:10:26 | smithi | master | centos | 8.1 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
fail | 5084101 | 2020-05-23 03:53:30 | 2020-05-23 05:25:03 | 2020-05-23 05:51:03 | 0:26:00 | 0:10:34 | 0:15:26 | smithi | master | rhel | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-snappy.yaml supported-random-distro$/{rhel_8.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Command failed on smithi068 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i b' |
pass | 5084102 | 2020-05-23 03:53:31 | 2020-05-23 05:26:50 | 2020-05-23 05:46:50 | 0:20:00 | 0:07:51 | 0:12:09 | smithi | master | centos | 8.1 | rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5084103 | 2020-05-23 03:53:32 | 2020-05-23 05:26:51 | 2020-05-23 05:42:50 | 0:15:59 | 0:04:36 | 0:11:23 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm_repos.yaml} | 1 | |
pass | 5084104 | 2020-05-23 03:53:33 | 2020-05-23 05:30:59 | 2020-05-23 05:52:59 | 0:22:00 | 0:12:30 | 0:09:30 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4K_rand_read.yaml} | 1 | |
pass | 5084105 | 2020-05-23 03:53:33 | 2020-05-23 05:30:59 | 2020-05-23 06:15:00 | 0:44:01 | 0:20:55 | 0:23:06 | smithi | master | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |
pass | 5084106 | 2020-05-23 03:53:34 | 2020-05-23 05:31:00 | 2020-05-23 06:09:00 | 0:38:00 | 0:27:57 | 0:10:03 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5084107 | 2020-05-23 03:53:35 | 2020-05-23 05:32:40 | 2020-05-23 06:24:40 | 0:52:00 | 0:28:07 | 0:23:53 | smithi | master | rhel | 8.1 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/sync.yaml workloads/rados_mon_workunits.yaml} | 2 | |
pass | 5084108 | 2020-05-23 03:53:36 | 2020-05-23 05:32:51 | 2020-05-23 05:50:51 | 0:18:00 | 0:07:05 | 0:10:55 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5084109 | 2020-05-23 03:53:37 | 2020-05-23 05:32:52 | 2020-05-23 06:22:53 | 0:50:01 | 0:31:26 | 0:18:35 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04.yaml fixed-2.yaml mode/packaged.yaml msgr/async-v2only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T06:02:38.185397+00:00 smithi044 bash[13314]: debug 2020-05-23T06:02:38.182+0000 7f3f6f299700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5084110 | 2020-05-23 03:53:38 | 2020-05-23 05:32:56 | 2020-05-23 06:04:56 | 0:32:00 | 0:22:33 | 0:09:27 | smithi | master | rhel | 8.1 | rados/singleton/{all/recovery-preemption.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5084111 | 2020-05-23 03:53:39 | 2020-05-23 05:32:56 | 2020-05-23 06:12:57 | 0:40:01 | 0:27:20 | 0:12:41 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
pass | 5084112 | 2020-05-23 03:53:39 | 2020-05-23 05:34:30 | 2020-05-23 06:26:30 | 0:52:00 | 0:30:00 | 0:22:00 | smithi | master | centos | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
pass | 5084113 | 2020-05-23 03:53:40 | 2020-05-23 05:34:51 | 2020-05-23 06:10:51 | 0:36:00 | 0:29:05 | 0:06:55 | smithi | master | centos | 8.1 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5084114 | 2020-05-23 03:53:41 | 2020-05-23 05:34:53 | 2020-05-23 06:06:53 | 0:32:00 | 0:15:19 | 0:16:41 | smithi | master | rhel | 8.0 | rados/cephadm/smoke/{distro/rhel_8.0.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5084115 | 2020-05-23 03:53:42 | 2020-05-23 05:34:58 | 2020-05-23 06:12:58 | 0:38:00 | 0:22:15 | 0:15:45 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/few.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 3 | |
pass | 5084116 | 2020-05-23 03:53:43 | 2020-05-23 05:34:58 | 2020-05-23 06:12:58 | 0:38:00 | 0:07:54 | 0:30:06 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
fail | 5084117 | 2020-05-23 03:53:43 | 2020-05-23 05:37:09 | 2020-05-23 06:17:10 | 0:40:01 | 0:31:54 | 0:08:07 | smithi | master | rhel | 8.0 | rados/cephadm/smoke-roleless/{distro/rhel_8.0.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
pass | 5084118 | 2020-05-23 03:53:44 | 2020-05-23 05:37:09 | 2020-05-23 05:55:09 | 0:18:00 | 0:08:39 | 0:09:21 | smithi | master | centos | 8.1 | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 2 | |
pass | 5084119 | 2020-05-23 03:53:45 | 2020-05-23 05:39:02 | 2020-05-23 06:01:02 | 0:22:00 | 0:12:22 | 0:09:38 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4K_rand_rw.yaml} | 1 | |
fail | 5084120 | 2020-05-23 03:53:46 | 2020-05-23 05:39:02 | 2020-05-23 05:57:02 | 0:18:00 | 0:10:07 | 0:07:53 | smithi | master | rhel | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zstd.yaml supported-random-distro$/{rhel_8.yaml} tasks/insights.yaml} | 2 | |
Failure Reason:
Command failed on smithi132 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a' |
pass | 5084121 | 2020-05-23 03:53:47 | 2020-05-23 05:39:02 | 2020-05-23 06:07:02 | 0:28:00 | 0:19:23 | 0:08:37 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
fail | 5084122 | 2020-05-23 03:53:48 | 2020-05-23 05:40:57 | 2020-05-23 06:02:57 | 0:22:00 | 0:07:48 | 0:14:12 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_adoption.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi157 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
pass | 5084123 | 2020-05-23 03:53:48 | 2020-05-23 05:40:57 | 2020-05-23 06:02:57 | 0:22:00 | 0:10:53 | 0:11:07 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/mgr.yaml} | 1 | |
pass | 5084124 | 2020-05-23 03:53:49 | 2020-05-23 05:40:57 | 2020-05-23 07:14:59 | 1:34:02 | 0:12:08 | 1:21:54 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5084125 | 2020-05-23 03:53:50 | 2020-05-23 05:40:57 | 2020-05-23 05:54:57 | 0:14:00 | 0:07:55 | 0:06:05 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
fail | 5084126 | 2020-05-23 03:53:51 | 2020-05-23 05:40:57 | 2020-05-23 06:26:58 | 0:46:01 | 0:22:29 | 0:23:32 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml mode/root.yaml msgr/async.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T06:16:53.384119+00:00 smithi150 bash[18053]: debug 2020-05-23T06:16:53.379+0000 7f127005e700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5084127 | 2020-05-23 03:53:52 | 2020-05-23 05:41:04 | 2020-05-23 05:57:03 | 0:15:59 | 0:09:17 | 0:06:42 | smithi | master | centos | 8.1 | rados/singleton/{all/test-crash.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5084128 | 2020-05-23 03:53:52 | 2020-05-23 05:43:06 | 2020-05-23 06:27:06 | 0:44:00 | 0:28:15 | 0:15:45 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
fail | 5084129 | 2020-05-23 03:53:53 | 2020-05-23 05:43:06 | 2020-05-23 06:59:07 | 1:16:01 | 1:06:50 | 0:09:11 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi041 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 5084130 | 2020-05-23 03:53:54 | 2020-05-23 05:43:07 | 2020-05-23 06:25:07 | 0:42:00 | 0:30:51 | 0:11:09 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
pass | 5084131 | 2020-05-23 03:53:55 | 2020-05-23 05:43:08 | 2020-05-23 06:05:08 | 0:22:00 | 0:13:13 | 0:08:47 | smithi | master | rhel | 8.1 | rados/cephadm/smoke/{distro/rhel_latest.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5084132 | 2020-05-23 03:53:56 | 2020-05-23 05:45:06 | 2020-05-23 06:09:06 | 0:24:00 | 0:12:07 | 0:11:53 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
pass | 5084133 | 2020-05-23 03:53:57 | 2020-05-23 05:46:56 | 2020-05-23 06:22:56 | 0:36:00 | 0:25:37 | 0:10:23 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
fail | 5084134 | 2020-05-23 03:53:57 | 2020-05-23 05:46:56 | 2020-05-23 06:38:56 | 0:52:00 | 0:40:49 | 0:11:11 | smithi | master | rhel | 8.1 | rados/cephadm/smoke-roleless/{distro/rhel_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T06:06:59.003429+00:00 smithi203 bash[22004]: debug 2020-05-23T06:06:59.002+0000 7f08d39d2700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi203 ... ' in syslog |
fail | 5084135 | 2020-05-23 03:53:58 | 2020-05-23 05:46:56 | 2020-05-23 06:28:56 | 0:42:00 | 0:31:34 | 0:10:26 | smithi | master | rhel | 8.1 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/force-sync-many.yaml workloads/rados_mon_workunits.yaml} | 2 | |
Failure Reason:
failed to reach quorum size 9 before timeout expired |
pass | 5084136 | 2020-05-23 03:53:59 | 2020-05-23 05:46:56 | 2020-05-23 06:12:56 | 0:26:00 | 0:18:35 | 0:07:25 | smithi | master | rhel | 8.1 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5084137 | 2020-05-23 03:54:00 | 2020-05-23 05:46:56 | 2020-05-23 06:08:56 | 0:22:00 | 0:10:43 | 0:11:17 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4M_rand_read.yaml} | 1 | |
fail | 5084138 | 2020-05-23 03:54:01 | 2020-05-23 05:46:56 | 2020-05-23 06:04:56 | 0:18:00 | 0:07:44 | 0:10:16 | smithi | master | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zlib.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Command failed on smithi162 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a' |
pass | 5084139 | 2020-05-23 03:54:02 | 2020-05-23 05:49:06 | 2020-05-23 08:17:09 | 2:28:03 | 2:22:05 | 0:05:58 | smithi | master | rhel | 8.1 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5084140 | 2020-05-23 03:54:03 | 2020-05-23 05:49:06 | 2020-05-23 06:15:06 | 0:26:00 | 0:18:52 | 0:07:08 | smithi | master | centos | 8.1 | rados/valgrind-leaks/{1-start.yaml 2-inject-leak/none.yaml centos_latest.yaml} | 1 | |
pass | 5084141 | 2020-05-23 03:54:04 | 2020-05-23 05:49:06 | 2020-05-23 06:17:06 | 0:28:00 | 0:20:38 | 0:07:22 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
fail | 5084142 | 2020-05-23 03:54:04 | 2020-05-23 05:50:57 | 2020-05-23 06:16:56 | 0:25:59 | 0:12:25 | 0:13:34 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T06:12:41.465181+00:00 smithi072 bash[10404]: debug 2020-05-23T06:12:41.461+0000 7fa8ce3a0700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5084143 | 2020-05-23 03:54:05 | 2020-05-23 05:50:57 | 2020-05-23 07:58:59 | 2:08:02 | 1:48:50 | 0:19:12 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
pass | 5084144 | 2020-05-23 03:54:06 | 2020-05-23 05:50:57 | 2020-05-23 06:32:57 | 0:42:00 | 0:25:56 | 0:16:04 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
pass | 5084145 | 2020-05-23 03:54:07 | 2020-05-23 05:50:57 | 2020-05-23 06:08:56 | 0:17:59 | 0:11:26 | 0:06:33 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
fail | 5084146 | 2020-05-23 03:54:08 | 2020-05-23 05:50:57 | 2020-05-23 06:28:57 | 0:38:00 | 0:12:36 | 0:25:24 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T06:20:45.068603+00:00 smithi069 bash[10385]: debug 2020-05-23T06:20:45.060+0000 7f8d15941700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi069 ... ' in syslog |
pass | 5084147 | 2020-05-23 03:54:09 | 2020-05-23 05:50:59 | 2020-05-23 07:25:00 | 1:34:01 | 1:25:48 | 0:08:13 | smithi | master | centos | 8.1 | rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 2 | |
pass | 5084148 | 2020-05-23 03:54:09 | 2020-05-23 05:51:04 | 2020-05-23 06:39:05 | 0:48:01 | 0:22:22 | 0:25:39 | smithi | master | rhel | 8.1 | rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/mon_recovery.yaml} | 3 | |
pass | 5084149 | 2020-05-23 03:54:10 | 2020-05-23 05:53:01 | 2020-05-23 06:33:01 | 0:40:00 | 0:29:12 | 0:10:48 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 3 | |
fail | 5084150 | 2020-05-23 03:54:11 | 2020-05-23 05:53:01 | 2020-05-23 06:11:00 | 0:17:59 | 0:11:34 | 0:06:25 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi014 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
fail | 5084151 | 2020-05-23 03:54:12 | 2020-05-23 05:53:01 | 2020-05-23 06:13:00 | 0:19:59 | 0:07:37 | 0:12:22 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-hybrid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason:
Command failed on smithi102 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i c' |
pass | 5084152 | 2020-05-23 03:54:13 | 2020-05-23 05:53:01 | 2020-05-23 06:31:01 | 0:38:00 | 0:12:00 | 0:26:00 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-hybrid.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5084153 | 2020-05-23 03:54:14 | 2020-05-23 05:53:07 | 2020-05-23 06:41:08 | 0:48:01 | 0:27:35 | 0:20:26 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
pass | 5084154 | 2020-05-23 03:54:15 | 2020-05-23 05:54:47 | 2020-05-23 06:16:46 | 0:21:59 | 0:10:58 | 0:11:01 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4M_rand_rw.yaml} | 1 | |
pass | 5084155 | 2020-05-23 03:54:15 | 2020-05-23 05:54:51 | 2020-05-23 06:34:52 | 0:40:01 | 0:32:12 | 0:07:49 | smithi | master | rhel | 8.1 | rados/singleton/{all/thrash-eio.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 2 | |
fail | 5084156 | 2020-05-23 03:54:16 | 2020-05-23 05:54:54 | 2020-05-23 06:36:54 | 0:42:00 | 0:28:37 | 0:13:23 | smithi | master | centos | 8.0 | rados/cephadm/with-work/{distro/centos_8.0.yaml fixed-2.yaml mode/packaged.yaml msgr/async.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T06:20:50.184834+00:00 smithi146 bash[25416]: debug 2020-05-23T06:20:50.184+0000 7fdccbbd2700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5084157 | 2020-05-23 03:54:17 | 2020-05-23 05:54:57 | 2020-05-23 06:22:57 | 0:28:00 | 0:15:05 | 0:12:55 | smithi | master | rhel | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/readwrite.yaml} | 2 | |
pass | 5084158 | 2020-05-23 03:54:18 | 2020-05-23 05:54:58 | 2020-05-23 06:32:58 | 0:38:00 | 0:10:51 | 0:27:09 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
fail | 5084159 | 2020-05-23 03:54:19 | 2020-05-23 05:55:10 | 2020-05-23 09:31:15 | 3:36:05 | 3:28:01 | 0:08:04 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi132 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b3102246570f1a522971013abdaf7c83463c3d9a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 5084160 | 2020-05-23 03:54:20 | 2020-05-23 05:57:14 | 2020-05-23 06:21:14 | 0:24:00 | 0:12:42 | 0:11:18 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T06:17:02.872143+00:00 smithi098 bash[14938]: debug 2020-05-23T06:17:02.867+0000 7f66073e2700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5084161 | 2020-05-23 03:54:21 | 2020-05-23 05:57:14 | 2020-05-23 06:15:14 | 0:18:00 | 0:09:33 | 0:08:27 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5084162 | 2020-05-23 03:54:21 | 2020-05-23 05:57:14 | 2020-05-23 06:27:14 | 0:30:00 | 0:19:45 | 0:10:15 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
fail | 5084163 | 2020-05-23 03:54:22 | 2020-05-23 05:59:06 | 2020-05-23 06:21:05 | 0:21:59 | 0:12:21 | 0:09:38 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T06:14:46.889798+00:00 smithi137 bash[13585]: debug 2020-05-23T06:14:46.887+0000 7f8c84a5b700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi137 ... ' in syslog |
pass | 5084164 | 2020-05-23 03:54:23 | 2020-05-23 05:59:06 | 2020-05-23 06:43:06 | 0:44:00 | 0:27:02 | 0:16:58 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps-balanced.yaml} | 2 | |
pass | 5084165 | 2020-05-23 03:54:24 | 2020-05-23 05:59:06 | 2020-05-23 06:43:06 | 0:44:00 | 0:30:12 | 0:13:48 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5084166 | 2020-05-23 03:54:25 | 2020-05-23 05:59:06 | 2020-05-23 06:31:06 | 0:32:00 | 0:23:59 | 0:08:01 | smithi | master | centos | 8.1 | rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/misc.yaml} | 1 | |
pass | 5084167 | 2020-05-23 03:54:26 | 2020-05-23 06:00:36 | 2020-05-23 06:48:36 | 0:48:00 | 0:38:28 | 0:09:32 | smithi | master | rhel | 8.1 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/many.yaml workloads/snaps-few-objects.yaml} | 2 | |
fail | 5084168 | 2020-05-23 03:54:27 | 2020-05-23 06:00:36 | 2020-05-23 06:28:36 | 0:28:00 | 0:19:39 | 0:08:21 | smithi | master | centos | 8.1 | rados/cephadm/with-work/{distro/centos_latest.yaml fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T06:21:30.744458+00:00 smithi157 bash[25070]: debug 2020-05-23T06:21:30.743+0000 7fe1bd055700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5084169 | 2020-05-23 03:54:28 | 2020-05-23 06:13:13 | 2020-05-23 06:37:12 | 0:23:59 | 0:14:27 | 0:09:32 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4M_rand_write.yaml} | 1 | |
fail | 5084170 | 2020-05-23 03:54:29 | 2020-05-23 06:13:13 | 2020-05-23 06:33:13 | 0:20:00 | 0:12:22 | 0:07:38 | smithi | master | centos | 8.1 | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base' |
fail | 5084171 | 2020-05-23 03:54:29 | 2020-05-23 06:13:13 | 2020-05-23 06:27:12 | 0:13:59 | 0:07:31 | 0:06:28 | smithi | master | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zstd.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Command failed on smithi138 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i b' |
pass | 5084172 | 2020-05-23 03:54:30 | 2020-05-23 06:13:13 | 2020-05-23 08:57:16 | 2:44:03 | 2:33:46 | 0:10:17 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5084173 | 2020-05-23 03:54:31 | 2020-05-23 06:13:13 | 2020-05-23 06:53:14 | 0:40:01 | 0:20:19 | 0:19:42 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-balanced.yaml} | 2 | |
pass | 5084174 | 2020-05-23 03:54:32 | 2020-05-23 06:13:14 | 2020-05-23 06:27:13 | 0:13:59 | 0:04:38 | 0:09:21 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm_repos.yaml} | 1 | |
pass | 5084175 | 2020-05-23 03:54:33 | 2020-05-23 06:15:10 | 2020-05-23 06:57:10 | 0:42:00 | 0:25:03 | 0:16:57 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
pass | 5084176 | 2020-05-23 03:54:34 | 2020-05-23 06:15:10 | 2020-05-23 06:37:10 | 0:22:00 | 0:09:54 | 0:12:06 | smithi | master | rhel | 8.1 | rados/multimon/{clusters/3.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |
fail | 5084177 | 2020-05-23 03:54:35 | 2020-05-23 06:15:10 | 2020-05-23 06:47:10 | 0:32:00 | 0:12:46 | 0:19:14 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_latest.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T06:42:21.149616+00:00 smithi060 bash[10373]: debug 2020-05-23T06:42:21.145+0000 7f4171ec1700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
fail | 5084178 | 2020-05-23 03:54:35 | 2020-05-23 06:15:10 | 2020-05-23 06:43:10 | 0:28:00 | 0:07:36 | 0:20:24 | smithi | master | centos | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_8.yaml} tasks/progress.yaml} | 2 | |
Failure Reason:
Command failed on smithi194 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i b' |
pass | 5084179 | 2020-05-23 03:54:36 | 2020-05-23 06:15:15 | 2020-05-23 08:49:18 | 2:34:03 | 1:48:00 | 0:46:03 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/octopus.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 3 | |
pass | 5084180 | 2020-05-23 03:54:37 | 2020-05-23 06:16:31 | 2020-05-23 06:36:30 | 0:19:59 | 0:08:21 | 0:11:38 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5084181 | 2020-05-23 03:54:38 | 2020-05-23 06:16:45 | 2020-05-23 06:44:45 | 0:28:00 | 0:16:06 | 0:11:54 | smithi | master | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/repair_test.yaml} | 2 | |
pass | 5084182 | 2020-05-23 03:54:39 | 2020-05-23 06:16:48 | 2020-05-23 06:36:47 | 0:19:59 | 0:08:22 | 0:11:37 | smithi | master | centos | 8.1 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5084183 | 2020-05-23 03:54:40 | 2020-05-23 06:16:55 | 2020-05-23 06:54:55 | 0:38:00 | 0:29:23 | 0:08:37 | smithi | master | centos | 8.1 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5084184 | 2020-05-23 03:54:41 | 2020-05-23 06:16:58 | 2020-05-23 06:58:58 | 0:42:00 | 0:28:21 | 0:13:39 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 |