User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2020-05-23 15:15:01 | 2020-05-23 15:25:04 | 2020-05-23 20:54:59 | 5:29:55 | rados | wip-yuri-master_5.22.20 | smithi | c3321b7 | 15 | 53 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 5085497 | 2020-05-23 15:15:12 | 2020-05-23 15:23:10 | 2020-05-23 15:49:09 | 0:25:59 | 0:12:30 | 0:13:29 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T15:40:31.051237+00:00 smithi185 bash[10059]: debug 2020-05-23T15:40:31.045+0000 7f2740b78700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi185 ... ' in syslog |
pass | 5085498 | 2020-05-23 15:15:13 | 2020-05-23 15:23:10 | 2020-05-23 16:07:10 | 0:44:00 | 0:15:12 | 0:28:48 | smithi | py2 | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_cls_all.yaml} | 2 | |
fail | 5085499 | 2020-05-23 15:15:14 | 2020-05-23 15:23:10 | 2020-05-23 16:23:10 | 1:00:00 | 0:33:04 | 0:26:56 | smithi | py2 | rhel | 8.0 | rados/cephadm/with-work/{distro/rhel_8.0.yaml fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:08:35.315877+00:00 smithi028 bash[30641]: debug 2020-05-23T16:08:35.314+0000 7f78aeab1700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5085500 | 2020-05-23 15:15:14 | 2020-05-23 15:24:51 | 2020-05-23 20:54:59 | 5:30:08 | 3:49:30 | 1:40:38 | smithi | py2 | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml thrashosds-health.yaml ubuntu_latest.yaml} | 4 | |
fail | 5085501 | 2020-05-23 15:15:15 | 2020-05-23 15:24:51 | 2020-05-23 16:12:52 | 0:48:01 | 0:23:56 | 0:24:05 | smithi | py2 | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 3 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
fail | 5085502 | 2020-05-23 15:15:16 | 2020-05-23 15:25:04 | 2020-05-23 15:49:04 | 0:24:00 | 0:13:00 | 0:11:00 | smithi | py2 | centos | 7.6 | rados/cephadm/smoke/{distro/centos_7.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T15:46:06.913381+00:00 smithi114 bash: debug 2020-05-23T15:46:06.911+0000 7fc91613a700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
fail | 5085503 | 2020-05-23 15:15:17 | 2020-05-23 15:27:01 | 2020-05-23 15:53:01 | 0:26:00 | 0:12:53 | 0:13:07 | smithi | py2 | centos | 7.6 | rados/cephadm/smoke-roleless/{distro/centos_7.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T15:45:11.873367+00:00 smithi179 bash: debug 2020-05-23T15:45:11.871+0000 7f1ee650f700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi179 ... ' in syslog |
fail | 5085504 | 2020-05-23 15:15:18 | 2020-05-23 15:27:01 | 2020-05-23 15:45:01 | 0:18:00 | 0:03:06 | 0:14:54 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml 3-wait.yaml distro$/{ubuntu_18.04_podman.yaml} fixed-2.yaml} | 2 | |
Failure Reason:
Command failed on smithi184 with status 5: 'sudo systemctl stop ceph-1bf7f52a-9d0c-11ea-a06a-001a4aab830c@mon.a' |
fail | 5085505 | 2020-05-23 15:15:19 | 2020-05-23 15:27:01 | 2020-05-23 15:51:01 | 0:24:00 | 0:11:30 | 0:12:30 | smithi | py2 | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_adoption.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi040 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
fail | 5085506 | 2020-05-23 15:15:19 | 2020-05-23 15:27:01 | 2020-05-23 16:23:02 | 0:56:01 | 0:46:49 | 0:09:12 | smithi | py2 | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | |
Failure Reason:
saw valgrind issues |
pass | 5085507 | 2020-05-23 15:15:20 | 2020-05-23 15:28:46 | 2020-05-23 16:10:47 | 0:42:01 | 0:21:41 | 0:20:20 | smithi | py2 | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 3 | |
fail | 5085508 | 2020-05-23 15:15:21 | 2020-05-23 15:29:29 | 2020-05-23 15:59:28 | 0:29:59 | 0:22:14 | 0:07:45 | smithi | py2 | rhel | 8.1 | rados/cephadm/with-work/{distro/rhel_latest.yaml fixed-2.yaml mode/root.yaml msgr/async-v2only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T15:51:40.594157+00:00 smithi077 bash[25618]: debug 2020-05-23T15:51:40.592+0000 7fbc3a5e0700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
fail | 5085509 | 2020-05-23 15:15:22 | 2020-05-23 15:31:00 | 2020-05-23 16:13:01 | 0:42:01 | 0:22:23 | 0:19:38 | smithi | py2 | centos | 8.0 | rados/cephadm/smoke-roleless/{distro/centos_8.0.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T15:58:05.664364+00:00 smithi193 bash[21910]: debug 2020-05-23T15:58:05.662+0000 7f835015d700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi193 ... ' in syslog |
fail | 5085510 | 2020-05-23 15:15:23 | 2020-05-23 15:31:01 | 2020-05-23 15:51:00 | 0:19:59 | 0:07:42 | 0:12:17 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi095 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
pass | 5085511 | 2020-05-23 15:15:24 | 2020-05-23 15:32:43 | 2020-05-23 16:10:43 | 0:38:00 | 0:15:35 | 0:22:25 | smithi | py2 | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
fail | 5085512 | 2020-05-23 15:15:25 | 2020-05-23 15:32:43 | 2020-05-23 16:16:43 | 0:44:00 | 0:30:57 | 0:13:03 | smithi | py2 | centos | 8.1 | rados/cephadm/smoke-roleless/{distro/centos_latest.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
fail | 5085513 | 2020-05-23 15:15:25 | 2020-05-23 15:33:11 | 2020-05-23 16:35:12 | 1:02:01 | 0:31:38 | 0:30:23 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04.yaml fixed-2.yaml mode/packaged.yaml msgr/async.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:14:31.481827+00:00 smithi005 bash[13073]: debug 2020-05-23T16:14:31.477+0000 7fdf30d0e700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5085514 | 2020-05-23 15:15:26 | 2020-05-23 15:34:59 | 2020-05-23 15:58:58 | 0:23:59 | 0:16:08 | 0:07:51 | smithi | py2 | rhel | 7.7 | rados/cephadm/smoke/{distro/rhel_7.yaml fixed-2.yaml start.yaml} | 2 | |
fail | 5085515 | 2020-05-23 15:15:27 | 2020-05-23 15:34:59 | 2020-05-23 16:02:59 | 0:28:00 | 0:17:40 | 0:10:20 | smithi | py2 | rhel | 7.7 | rados/cephadm/smoke-roleless/{distro/rhel_7.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T15:57:49.189514+00:00 smithi098 bash: debug 2020-05-23T15:57:49.188+0000 7fb747304700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi098 ... ' in syslog |
pass | 5085516 | 2020-05-23 15:15:28 | 2020-05-23 15:34:59 | 2020-05-23 16:37:00 | 1:02:01 | 0:53:08 | 0:08:53 | smithi | py2 | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-snappy.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} | 2 | |
pass | 5085517 | 2020-05-23 15:15:29 | 2020-05-23 15:36:45 | 2020-05-23 17:02:47 | 1:26:02 | 1:08:51 | 0:17:11 | smithi | py2 | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 3 | |
fail | 5085518 | 2020-05-23 15:15:30 | 2020-05-23 15:36:46 | 2020-05-23 16:12:46 | 0:36:00 | 0:22:06 | 0:13:54 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:02:00.691868+00:00 smithi072 bash[17602]: cephadm 2020-05-23T16:01:59.421953+0000 mgr.y (mgr.14142) 197 : cephadm [ERR] cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5085519 | 2020-05-23 15:15:31 | 2020-05-23 15:38:47 | 2020-05-23 16:00:47 | 0:22:00 | 0:15:30 | 0:06:30 | smithi | py2 | rhel | 8.0 | rados/cephadm/smoke/{distro/rhel_8.0.yaml fixed-2.yaml start.yaml} | 2 | |
fail | 5085520 | 2020-05-23 15:15:32 | 2020-05-23 15:38:47 | 2020-05-23 16:26:47 | 0:48:00 | 0:36:25 | 0:11:35 | smithi | py2 | rhel | 8.0 | rados/cephadm/smoke-roleless/{distro/rhel_8.0.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
fail | 5085521 | 2020-05-23 15:15:33 | 2020-05-23 15:40:42 | 2020-05-23 15:58:41 | 0:17:59 | 0:07:41 | 0:10:18 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_adoption.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
pass | 5085522 | 2020-05-23 15:15:33 | 2020-05-23 15:40:42 | 2020-05-23 16:28:42 | 0:48:00 | 0:38:08 | 0:09:52 | smithi | py2 | centos | 8.1 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/force-sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
fail | 5085523 | 2020-05-23 15:15:34 | 2020-05-23 15:40:42 | 2020-05-23 16:30:42 | 0:50:00 | 0:40:47 | 0:09:13 | smithi | py2 | rhel | 8.1 | rados/cephadm/smoke-roleless/{distro/rhel_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T15:59:09.279107+00:00 smithi154 bash[21849]: debug 2020-05-23T15:59:09.278+0000 7f7649da8700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi154 ... ' in syslog |
fail | 5085524 | 2020-05-23 15:15:35 | 2020-05-23 15:40:42 | 2020-05-23 16:28:42 | 0:48:00 | 0:28:19 | 0:19:41 | smithi | py2 | centos | 8.1 | rados/cephadm/with-work/{distro/centos_latest.yaml fixed-2.yaml mode/root.yaml msgr/async-v2only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:12:59.695934+00:00 smithi160 bash[24734]: cephadm 2020-05-23T16:12:58.194175+0000 mgr.y (mgr.14142) 337 : cephadm [ERR] cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
fail | 5085525 | 2020-05-23 15:15:36 | 2020-05-23 15:40:43 | 2020-05-23 16:06:43 | 0:26:00 | 0:12:19 | 0:13:41 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:02:38.837794+00:00 smithi085 bash[10169]: debug 2020-05-23T16:02:38.832+0000 7fe5d2881700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
fail | 5085526 | 2020-05-23 15:15:37 | 2020-05-23 15:40:44 | 2020-05-23 16:06:43 | 0:25:59 | 0:12:28 | 0:13:31 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T15:58:09.791353+00:00 smithi081 bash[10081]: debug 2020-05-23T15:58:09.789+0000 7ffb5ca81700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi081 ... ' in syslog |
fail | 5085527 | 2020-05-23 15:15:38 | 2020-05-23 15:40:45 | 2020-05-23 16:00:45 | 0:20:00 | 0:11:19 | 0:08:41 | smithi | py2 | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi103 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
pass | 5085528 | 2020-05-23 15:15:39 | 2020-05-23 15:43:18 | 2020-05-23 17:05:19 | 1:22:01 | 0:41:06 | 0:40:55 | smithi | py2 | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 3 | |
pass | 5085529 | 2020-05-23 15:15:39 | 2020-05-23 15:44:18 | 2020-05-23 16:16:17 | 0:31:59 | 0:20:03 | 0:11:56 | smithi | py2 | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |
fail | 5085530 | 2020-05-23 15:15:40 | 2020-05-23 15:44:35 | 2020-05-23 16:16:34 | 0:31:59 | 0:13:04 | 0:18:55 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:14:13.786839+00:00 smithi025 bash[14732]: debug 2020-05-23T16:14:13.785+0000 7f9361c1f700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
fail | 5085531 | 2020-05-23 15:15:41 | 2020-05-23 15:44:35 | 2020-05-23 16:22:34 | 0:37:59 | 0:12:27 | 0:25:32 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:14:40.958274+00:00 smithi061 bash[13276]: debug 2020-05-23T16:14:40.957+0000 7f359ebca700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi061 ... ' in syslog |
fail | 5085532 | 2020-05-23 15:15:42 | 2020-05-23 15:44:36 | 2020-05-23 16:16:36 | 0:32:00 | 0:24:56 | 0:07:04 | smithi | py2 | rhel | 8.0 | rados/cephadm/with-work/{distro/rhel_8.0.yaml fixed-2.yaml mode/packaged.yaml msgr/async.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:09:28.591208+00:00 smithi003 bash[30675]: debug 2020-05-23T16:09:28.590+0000 7fc212a97700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5085533 | 2020-05-23 15:15:43 | 2020-05-23 15:44:40 | 2020-05-23 17:32:41 | 1:48:01 | 1:34:45 | 0:13:16 | smithi | py2 | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
fail | 5085534 | 2020-05-23 15:15:44 | 2020-05-23 15:44:45 | 2020-05-23 16:22:46 | 0:38:01 | 0:12:37 | 0:25:24 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_latest.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:19:33.033811+00:00 smithi068 bash[10033]: debug 2020-05-23T16:19:33.028+0000 7f47591be700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
fail | 5085535 | 2020-05-23 15:15:45 | 2020-05-23 15:44:45 | 2020-05-23 16:24:46 | 0:40:01 | 0:12:33 | 0:27:28 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:15:30.830030+00:00 smithi115 bash[10093]: debug 2020-05-23T16:15:30.827+0000 7f4451159700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi115 ... ' in syslog |
fail | 5085536 | 2020-05-23 15:15:46 | 2020-05-23 15:44:49 | 2020-05-23 16:24:49 | 0:40:00 | 0:30:51 | 0:09:09 | smithi | py2 | rhel | 8.1 | rados/cephadm/with-work/{distro/rhel_latest.yaml fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:10:09.752266+00:00 smithi200 bash[25623]: debug 2020-05-23T16:10:09.751+0000 7fc9817d7700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
fail | 5085537 | 2020-05-23 15:15:46 | 2020-05-23 15:45:01 | 2020-05-23 16:11:01 | 0:26:00 | 0:13:01 | 0:12:59 | smithi | py2 | centos | 7.6 | rados/cephadm/smoke/{distro/centos_7.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:07:59.580245+00:00 smithi114 bash: debug 2020-05-23T16:07:59.579+0000 7f542357b700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
fail | 5085538 | 2020-05-23 15:15:47 | 2020-05-23 15:45:02 | 2020-05-23 16:11:02 | 0:26:00 | 0:13:12 | 0:12:48 | smithi | py2 | centos | 7.6 | rados/cephadm/smoke-roleless/{distro/centos_7.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:03:02.531910+00:00 smithi059 bash: debug 2020-05-23T16:03:02.528+0000 7fc91514c700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi059 ... ' in syslog |
fail | 5085539 | 2020-05-23 15:15:48 | 2020-05-23 15:46:47 | 2020-05-23 16:22:47 | 0:36:00 | 0:23:40 | 0:12:20 | smithi | py2 | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 3 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
fail | 5085540 | 2020-05-23 15:15:49 | 2020-05-23 15:46:47 | 2020-05-23 16:10:47 | 0:24:00 | 0:10:33 | 0:13:27 | smithi | py2 | rhel | 8.0 | rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml 3-wait.yaml distro$/{rhel_8.0.yaml} fixed-2.yaml} | 2 | |
Failure Reason:
Command failed on smithi136 with status 5: 'sudo systemctl stop ceph-42ffefee-9d0f-11ea-a06a-001a4aab830c@mon.a' |
fail | 5085541 | 2020-05-23 15:15:50 | 2020-05-23 15:46:49 | 2020-05-23 16:04:48 | 0:17:59 | 0:11:36 | 0:06:23 | smithi | py2 | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_adoption.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi018 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
fail | 5085542 | 2020-05-23 15:15:51 | 2020-05-23 15:47:36 | 2020-05-23 16:47:36 | 1:00:00 | 0:39:44 | 0:20:16 | smithi | py2 | centos | 8.0 | rados/cephadm/smoke-roleless/{distro/centos_8.0.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:15:02.269070+00:00 smithi082 bash[21897]: debug 2020-05-23T16:15:02.267+0000 7f3effb19700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi082 ... ' in syslog |
pass | 5085543 | 2020-05-23 15:15:52 | 2020-05-23 15:48:42 | 2020-05-23 17:38:44 | 1:50:02 | 1:35:54 | 0:14:08 | smithi | py2 | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 3 | |
fail | 5085544 | 2020-05-23 15:15:52 | 2020-05-23 15:48:45 | 2020-05-23 16:04:45 | 0:16:00 | 0:07:43 | 0:08:17 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi095 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
fail | 5085545 | 2020-05-23 15:15:53 | 2020-05-23 15:49:05 | 2020-05-23 16:45:05 | 0:56:00 | 0:46:56 | 0:09:04 | smithi | py2 | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | |
Failure Reason:
saw valgrind issues |
fail | 5085546 | 2020-05-23 15:15:54 | 2020-05-23 15:49:10 | 2020-05-23 16:37:10 | 0:48:00 | 0:39:46 | 0:08:14 | smithi | py2 | centos | 8.1 | rados/cephadm/smoke-roleless/{distro/centos_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:04:35.488834+00:00 smithi093 bash[21606]: debug 2020-05-23T16:04:35.487+0000 7f5a9d33a700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi093 ... ' in syslog |
pass | 5085547 | 2020-05-23 15:15:55 | 2020-05-23 15:51:04 | 2020-05-23 16:17:04 | 0:26:00 | 0:16:43 | 0:09:17 | smithi | py2 | rhel | 7.7 | rados/cephadm/smoke/{distro/rhel_7.yaml fixed-2.yaml start.yaml} | 2 | |
fail | 5085548 | 2020-05-23 15:15:56 | 2020-05-23 15:51:04 | 2020-05-23 16:39:04 | 0:48:00 | 0:34:56 | 0:13:04 | smithi | py2 | rhel | 7.7 | rados/cephadm/smoke-roleless/{distro/rhel_7.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
fail | 5085549 | 2020-05-23 15:15:57 | 2020-05-23 15:51:04 | 2020-05-23 20:25:10 | 4:34:06 | 4:29:10 | 0:04:56 | smithi | py2 | rhel | 8.1 | rados/standalone/{supported-random-distro$/{rhel_8.yaml} workloads/osd.yaml} | 1 | |
Failure Reason:
Command failed (workunit test osd/osd-bench.sh) on smithi040 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bench.sh' |
fail | 5085550 | 2020-05-23 15:15:58 | 2020-05-23 15:52:46 | 2020-05-23 16:12:45 | 0:19:59 | 0:13:43 | 0:06:16 | smithi | py2 | rhel | 8.1 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi175 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh' |
pass | 5085551 | 2020-05-23 15:15:58 | 2020-05-23 15:52:59 | 2020-05-23 16:26:59 | 0:34:00 | 0:15:08 | 0:18:52 | smithi | py2 | rhel | 8.0 | rados/cephadm/smoke/{distro/rhel_8.0.yaml fixed-2.yaml start.yaml} | 2 | |
fail | 5085552 | 2020-05-23 15:15:59 | 2020-05-23 15:53:02 | 2020-05-23 16:29:02 | 0:36:00 | 0:29:42 | 0:06:18 | smithi | py2 | rhel | 8.0 | rados/cephadm/smoke-roleless/{distro/rhel_8.0.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
fail | 5085553 | 2020-05-23 15:16:00 | 2020-05-23 15:54:45 | 2020-05-23 16:12:45 | 0:18:00 | 0:07:53 | 0:10:07 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_adoption.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi066 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
fail | 5085554 | 2020-05-23 15:16:01 | 2020-05-23 15:54:45 | 2020-05-23 16:40:46 | 0:46:01 | 0:38:31 | 0:07:30 | smithi | py2 | rhel | 8.1 | rados/cephadm/smoke-roleless/{distro/rhel_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:10:28.962670+00:00 smithi069 bash[21884]: debug 2020-05-23T16:10:28.961+0000 7fd44a270700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi069 ... ' in syslog
fail | 5085555 | 2020-05-23 15:16:02 | 2020-05-23 15:54:46 | 2020-05-23 16:38:46 | 0:44:00 | 0:26:26 | 0:17:34 | smithi | py2 | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 3 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
fail | 5085556 | 2020-05-23 15:16:02 | 2020-05-23 15:54:46 | 2020-05-23 16:22:45 | 0:27:59 | 0:12:33 | 0:15:26 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:17:56.008380+00:00 smithi132 bash[10200]: debug 2020-05-23T16:17:56.003+0000 7f0200e84700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
fail | 5085557 | 2020-05-23 15:16:03 | 2020-05-23 15:55:00 | 2020-05-23 16:57:01 | 1:02:01 | 0:52:11 | 0:09:50 | smithi | py2 | centos | 8.1 | rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/scrub.yaml} | 1 | |
Failure Reason:
Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
fail | 5085558 | 2020-05-23 15:16:04 | 2020-05-23 15:55:01 | 2020-05-23 16:33:01 | 0:38:00 | 0:12:34 | 0:25:26 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:23:55.415552+00:00 smithi101 bash[10073]: debug 2020-05-23T16:23:55.411+0000 7ffa4248d700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi101 ... ' in syslog
fail | 5085559 | 2020-05-23 15:16:05 | 2020-05-23 15:55:12 | 2020-05-23 16:15:11 | 0:19:59 | 0:11:17 | 0:08:42 | smithi | py2 | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi196 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3321b7686f181e1bcb805a1fb24baced390ae4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 5085560 | 2020-05-23 15:16:06 | 2020-05-23 15:57:08 | 2020-05-23 18:29:11 | 2:32:03 | 2:07:10 | 0:24:53 | smithi | py2 | rhel | 8.1 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''
fail | 5085561 | 2020-05-23 15:16:07 | 2020-05-23 15:58:59 | 2020-05-23 16:22:58 | 0:23:59 | 0:13:00 | 0:10:59 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:19:09.853767+00:00 smithi026 bash[14249]: cephadm 2020-05-23T16:19:08.119180+0000 mgr.y (mgr.14140) 202 : cephadm [ERR] cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
fail | 5085562 | 2020-05-23 15:16:08 | 2020-05-23 15:59:00 | 2020-05-23 16:24:59 | 0:25:59 | 0:12:41 | 0:13:18 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:18:29.270466+00:00 smithi057 bash[13149]: debug 2020-05-23T16:18:29.266+0000 7fe96b127700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi057 ... ' in syslog
fail | 5085563 | 2020-05-23 15:16:09 | 2020-05-23 15:59:29 | 2020-05-23 16:25:29 | 0:26:00 | 0:12:40 | 0:13:20 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_latest.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:20:15.463136+00:00 smithi131 bash[10127]: debug 2020-05-23T16:20:15.457+0000 7f1cd9b53700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
fail | 5085564 | 2020-05-23 15:16:09 | 2020-05-23 16:00:47 | 2020-05-23 16:24:47 | 0:24:00 | 0:12:32 | 0:11:28 | smithi | py2 | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-23T16:15:19.649471+00:00 smithi156 bash[10130]: debug 2020-05-23T16:15:19.647+0000 7fe124219700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi156 ... ' in syslog