User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
ideepika | 2021-06-09 16:53:54 | 2021-06-09 17:49:10 | 2021-06-15 06:43:09 | 5 days, 12:53:59 | rados | wip-yuri7-testing-2021-06-08-0747-octopus | smithi | 8d06216 | 30 | 28 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6162974 | 2021-06-09 16:55:03 | 2021-06-09 17:48:50 | 2021-06-09 18:18:53 | 0:30:03 | 0:18:37 | 0:11:26 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6162976 | 2021-06-09 16:55:04 | 2021-06-09 17:49:10 | 2021-06-09 18:02:28 | 0:13:18 | 0:03:35 | 0:09:43 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi064 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"
pass | 6162978 | 2021-06-09 16:55:05 | 2021-06-09 17:49:11 | 2021-06-09 18:14:46 | 0:25:35 | 0:10:27 | 0:15:08 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/upmap msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-agent-small} | 2 | |
fail | 6162980 | 2021-06-09 16:55:06 | 2021-06-09 17:53:12 | 2021-06-09 18:09:35 | 0:16:23 | 0:03:45 | 0:12:38 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable distro/ubuntu_20.04 fixed-2 mode/root msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi032 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"
pass | 6162982 | 2021-06-09 16:55:07 | 2021-06-09 17:54:03 | 2021-06-09 18:25:33 | 0:31:30 | 0:20:08 | 0:11:22 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/osd-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
pass | 6162984 | 2021-06-09 16:55:08 | 2021-06-09 17:54:03 | 2021-06-09 18:23:54 | 0:29:51 | 0:18:32 | 0:11:19 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 | |
pass | 6162985 | 2021-06-09 16:55:09 | 2021-06-09 17:54:33 | 2021-06-09 18:26:24 | 0:31:51 | 0:21:07 | 0:10:44 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
pass | 6162986 | 2021-06-09 16:55:10 | 2021-06-09 17:55:04 | 2021-06-09 18:26:36 | 0:31:32 | 0:21:19 | 0:10:13 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
fail | 6162987 | 2021-06-09 16:55:11 | 2021-06-09 17:55:06 | 2021-06-09 17:57:05 | 0:01:59 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_stable | — |
Failure Reason: list index out of range
pass | 6162988 | 2021-06-09 16:55:12 | 2021-06-09 17:55:04 | 2021-06-09 18:17:13 | 0:22:09 | 0:11:20 | 0:10:49 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/failover} | 2 | |
fail | 6162989 | 2021-06-09 16:55:13 | 2021-06-09 17:55:24 | 2021-06-09 18:31:44 | 0:36:20 | 0:24:43 | 0:11:37 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6162990 | 2021-06-09 16:55:14 | 2021-06-09 17:56:14 | 2021-06-09 18:09:36 | 0:13:22 | 0:03:35 | 0:09:47 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_adoption} | 1 | |
Failure Reason: Command failed on smithi078 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"
pass | 6162991 | 2021-06-09 16:55:15 | 2021-06-09 17:56:15 | 2021-06-09 18:19:46 | 0:23:31 | 0:13:19 | 0:10:12 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} | 2 | |
pass | 6162992 | 2021-06-09 16:55:16 | 2021-06-09 17:56:15 | 2021-06-09 18:29:33 | 0:33:18 | 0:21:55 | 0:11:23 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6162993 | 2021-06-09 16:55:17 | 2021-06-09 17:56:15 | 2021-06-09 18:25:19 | 0:29:04 | 0:18:42 | 0:10:22 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 | |
pass | 6162994 | 2021-06-09 16:55:18 | 2021-06-09 17:56:15 | 2021-06-09 18:26:32 | 0:30:17 | 0:19:25 | 0:10:52 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 6162995 | 2021-06-09 16:55:19 | 2021-06-09 17:56:26 | 2021-06-09 18:19:26 | 0:23:00 | 0:12:38 | 0:10:22 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} | 2 | |
fail | 6162996 | 2021-06-09 16:55:20 | 2021-06-09 17:56:26 | 2021-06-09 18:33:34 | 0:37:08 | 0:24:39 | 0:12:29 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
fail | 6162997 | 2021-06-09 16:55:21 | 2021-06-09 17:58:08 | 2021-06-09 18:00:08 | 0:02:00 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_testing | — |
Failure Reason: list index out of range
fail | 6162998 | 2021-06-09 16:55:22 | 2021-06-09 17:58:06 | 2021-06-09 18:11:15 | 0:13:09 | 0:03:35 | 0:09:34 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"
pass | 6163000 | 2021-06-09 16:55:23 | 2021-06-09 17:58:07 | 2021-06-09 18:21:59 | 0:23:52 | 0:10:50 | 0:13:02 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6163002 | 2021-06-09 16:55:24 | 2021-06-09 17:58:27 | 2021-06-09 18:29:57 | 0:31:30 | 0:21:18 | 0:10:12 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/off msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bd325e75d2e6ae84104545e86d1104e7d5750c52 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6163004 | 2021-06-09 16:55:25 | 2021-06-09 17:58:27 | 2021-06-09 18:26:24 | 0:27:57 | 0:17:33 | 0:10:24 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6163006 | 2021-06-09 16:55:26 | 2021-06-09 17:58:37 | 2021-06-09 18:32:27 | 0:33:50 | 0:23:43 | 0:10:07 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
fail | 6163008 | 2021-06-09 16:55:27 | 2021-06-09 17:58:48 | 2021-06-09 18:12:31 | 0:13:43 | 0:03:38 | 0:10:05 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi165 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"
pass | 6163011 | 2021-06-09 16:55:29 | 2021-06-09 17:58:48 | 2021-06-09 18:17:25 | 0:18:37 | 0:09:23 | 0:09:14 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph objectstore/bluestore-low-osd-mem-target openstack settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
pass | 6163014 | 2021-06-09 16:55:30 | 2021-06-09 17:58:48 | 2021-06-09 18:16:18 | 0:17:30 | 0:07:29 | 0:10:01 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/alloc-hint supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 6163016 | 2021-06-09 16:55:31 | 2021-06-09 17:58:58 | 2021-06-09 18:16:19 | 0:17:21 | 0:04:08 | 0:13:13 | smithi | master | ubuntu | 20.04 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack start} 1-install/mimic 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 | |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&ref=mimic
pass | 6163018 | 2021-06-09 16:55:32 | 2021-06-09 17:58:59 | 2021-06-09 18:36:51 | 0:37:52 | 0:26:55 | 0:10:57 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 6163020 | 2021-06-09 16:55:33 | 2021-06-09 17:59:11 | 2021-06-09 18:01:10 | 0:01:59 | 0 | | smithi | master | centos | 8.2 | rados/cephadm/dashboard/0-distro/centos_8.2_kubic_stable | — |
Failure Reason: list index out of range
pass | 6163022 | 2021-06-09 16:55:34 | 2021-06-09 17:59:09 | 2021-06-09 18:16:36 | 0:17:27 | 0:07:38 | 0:09:49 | smithi | master | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_striper} | 2 | |
fail | 6163024 | 2021-06-09 16:55:35 | 2021-06-09 17:59:09 | 2021-06-09 18:52:40 | 0:53:31 | 0:43:07 | 0:10:24 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/mon-config-keys msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command crashed: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bd325e75d2e6ae84104545e86d1104e7d5750c52 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_config_key.py'
fail | 6163026 | 2021-06-09 16:55:36 | 2021-06-09 17:59:21 | 2021-06-09 18:01:21 | 0:02:00 | 0 | | smithi | master | centos | 8.2 | rados/cephadm/thrash/0-distro/centos_8.2_kubic_stable | — |
Failure Reason: list index out of range
pass | 6163028 | 2021-06-09 16:55:37 | 2021-06-09 17:59:20 | 2021-06-09 18:24:07 | 0:24:47 | 0:10:31 | 0:14:16 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6163030 | 2021-06-09 16:55:38 | 2021-06-09 18:00:00 | 2021-06-09 18:27:08 | 0:27:08 | 0:16:52 | 0:10:16 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6163032 | 2021-06-09 16:55:39 | 2021-06-09 18:00:10 | 2021-06-09 18:34:13 | 0:34:03 | 0:23:27 | 0:10:36 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_18.04} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
fail | 6163034 | 2021-06-09 16:55:40 | 2021-06-09 18:00:30 | 2021-06-09 18:15:41 | 0:15:11 | 0:03:33 | 0:11:38 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_cephadm} | 1 | |
Failure Reason: Command failed on smithi091 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"
pass | 6163036 | 2021-06-09 16:55:40 | 2021-06-09 18:00:31 | 2021-06-09 18:23:10 | 0:22:39 | 0:12:45 | 0:09:54 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/osd-recovery msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6163038 | 2021-06-09 16:55:41 | 2021-06-09 18:01:31 | 2021-06-09 18:39:44 | 0:38:13 | 0:29:43 | 0:08:30 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} | 2 | |
pass | 6163040 | 2021-06-09 16:55:42 | 2021-06-09 18:02:31 | 2021-06-09 18:36:34 | 0:34:03 | 0:18:34 | 0:15:29 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 | |
dead | 6163042 | 2021-06-09 16:55:43 | 2021-06-09 18:07:12 | 2021-06-10 06:16:17 | 12:09:05 | | | smithi | master | ubuntu | 20.04 | rados/perf/{ceph objectstore/bluestore-bitmap openstack settings/optimized ubuntu_latest workloads/cosbench_64K_read_write} | 1 |
Failure Reason: hit max job timeout
fail | 6163044 | 2021-06-09 16:55:44 | 2021-06-09 18:07:13 | 2021-06-09 18:43:35 | 0:36:22 | 0:24:57 | 0:11:25 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
pass | 6163046 | 2021-06-09 16:55:45 | 2021-06-09 18:08:03 | 2021-06-09 18:34:44 | 0:26:41 | 0:20:00 | 0:06:41 | smithi | master | rhel | 8.3 | rados/singleton/{all/pg-autoscaler-progress-off msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 2 | |
dead | 6163048 | 2021-06-09 16:55:46 | 2021-06-09 18:08:13 | 2021-06-10 06:16:54 | 12:08:41 | | | smithi | master | ubuntu | 20.04 | rados/perf/{ceph objectstore/bluestore-comp openstack settings/optimized ubuntu_latest workloads/cosbench_64K_write} | 1 |
Failure Reason: hit max job timeout
fail | 6163050 | 2021-06-09 16:55:47 | 2021-06-09 18:08:16 | 2021-06-09 18:10:15 | 0:01:59 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_stable | — |
Failure Reason: list index out of range
fail | 6163052 | 2021-06-09 16:55:48 | 2021-06-09 18:08:14 | 2021-06-09 18:21:25 | 0:13:11 | 0:03:36 | 0:09:35 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed on smithi093 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"
pass | 6163054 | 2021-06-09 16:55:49 | 2021-06-09 18:08:14 | 2021-06-09 18:39:09 | 0:30:55 | 0:18:18 | 0:12:37 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/upmap msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/small-objects} | 2 | |
pass | 6163056 | 2021-06-09 16:55:50 | 2021-06-09 18:09:44 | 2021-06-09 18:39:51 | 0:30:07 | 0:19:06 | 0:11:01 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} | 2 | |
fail | 6163058 | 2021-06-09 16:55:51 | 2021-06-09 18:10:37 | 2021-06-09 18:12:36 | 0:01:59 | 0 | | smithi | master | ubuntu | 20.04 | rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_testing | — |
Failure Reason: list index out of range
fail | 6163060 | 2021-06-09 16:55:52 | 2021-06-09 18:10:35 | 2021-06-09 18:23:54 | 0:13:19 | 0:03:36 | 0:09:43 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_adoption} | 1 | |
Failure Reason: Command failed on smithi081 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"
pass | 6163062 | 2021-06-09 16:55:53 | 2021-06-09 18:10:35 | 2021-06-09 18:34:17 | 0:23:42 | 0:12:40 | 0:11:02 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
fail | 6163064 | 2021-06-09 16:55:54 | 2021-06-09 18:11:17 | 2021-06-09 18:39:22 | 0:28:05 | 0:17:20 | 0:10:45 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6163066 | 2021-06-09 16:55:56 | 2021-06-09 18:12:17 | 2021-06-15 06:43:09 | 5 days, 12:30:52 | 0:13:54 | 5 days, 12:16:58 | smithi | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_cephadm} | 1 | |
Failure Reason: Command failed on smithi104 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"
fail | 6163069 | 2021-06-09 16:55:57 | 2021-06-09 18:12:17 | 2021-06-09 18:30:37 | 0:18:20 | 0:04:14 | 0:14:06 | smithi | master | ubuntu | 20.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-octopus 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap thrashosds-health ubuntu_latest} | 4 | |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&ref=nautilus
fail | 6163071 | 2021-06-09 16:55:58 | 2021-06-09 18:13:10 | 2021-06-09 18:15:09 | 0:01:59 | 0 | | smithi | master | centos | 8.2 | rados/cephadm/dashboard/0-distro/centos_8.2_kubic_stable | — |
Failure Reason: list index out of range
pass | 6163073 | 2021-06-09 16:55:59 | 2021-06-09 18:13:08 | 2021-06-09 18:37:00 | 0:23:52 | 0:12:20 | 0:11:32 | smithi | master | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-lz4 supported-random-distro$/{ubuntu_latest} tasks/insights} | 2 | |
fail | 6163075 | 2021-06-09 16:56:00 | 2021-06-09 18:13:38 | 2021-06-09 18:48:48 | 0:35:10 | 0:25:03 | 0:10:07 | smithi | master | ubuntu | 20.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)
pass | 6163077 | 2021-06-09 16:56:01 | 2021-06-09 18:13:39 | 2021-06-09 18:52:27 | 0:38:48 | 0:28:02 | 0:10:46 | smithi | master | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6163079 | 2021-06-09 16:56:02 | 2021-06-09 18:14:49 | 2021-06-09 18:46:33 | 0:31:44 | 0:20:18 | 0:11:26 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache-pool-snaps} | 2 | |
fail | 6163081 | 2021-06-09 16:56:02 | 2021-06-09 18:15:01 | 2021-06-09 18:17:00 | 0:01:59 | 0 | | smithi | master | centos | 8.2 | rados/cephadm/thrash/0-distro/centos_8.2_kubic_stable | — |
Failure Reason: list index out of range