User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2021-03-11 16:25:48 | 2021-03-11 16:30:28 | 2021-03-11 21:53:50 | 5:23:22 | rados | wip-kefu-testing-2021-03-11-2158 | gibba | 54c3c60 | 15 | 20 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 5956158 | 2021-03-11 16:27:02 | 2021-03-11 16:30:28 | 2021-03-11 21:53:50 | 5:23:22 | 4:37:44 | 0:45:38 | gibba | master | centos | 8.3 | rados/objectstore/{backends/objectstore supported-random-distro$/{centos_8}} | 1 |
pass | 5956159 | 2021-03-11 16:27:03 | 2021-03-11 16:30:38 | 2021-03-11 17:08:44 | 0:38:06 | 0:25:04 | 0:13:02 | gibba | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
fail | 5956160 | 2021-03-11 16:27:04 | 2021-03-11 16:33:29 | 2021-03-11 16:48:38 | 0:15:09 | 0:09:20 | 0:05:49 | gibba | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2 mon_election/connectivity} | 2 |
Failure Reason: Command failed on gibba026 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ae1d0c9e-8288-11eb-9082-001a4aab830c -- ceph orch apply iscsi iscsi user password --placement '1;gibba026=iscsi.a'"
dead | 5956161 | 2021-03-11 16:27:05 | 2021-03-11 16:33:29 | 2021-03-11 16:51:54 | 0:18:25 | | | gibba | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 |
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
pass | 5956162 | 2021-03-11 16:27:06 | 2021-03-11 16:36:00 | 2021-03-11 17:57:53 | 1:21:53 | 1:11:06 | 0:10:47 | gibba | master | ubuntu | 18.04 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 |
fail | 5956163 | 2021-03-11 16:27:07 | 2021-03-11 16:36:40 | 2021-03-11 16:52:26 | 0:15:46 | 0:06:32 | 0:09:14 | gibba | master | ubuntu | 20.04 | rados/upgrade/pacific-x/parallel/{0-start 1-tasks distro1$/{ubuntu_20.04_kubic_testing} mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: Command failed on gibba043 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/daemon-base:latest-pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bbd1f2d6-8289-11eb-9083-001a4aab830c -- ceph orch host add gibba043'
pass | 5956164 | 2021-03-11 16:27:08 | 2021-03-11 16:39:21 | 2021-03-11 17:14:41 | 0:35:20 | 0:24:24 | 0:10:56 | gibba | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/lockdep} | 2 |
fail | 5956165 | 2021-03-11 16:27:08 | 2021-03-11 16:39:51 | 2021-03-11 17:01:09 | 0:21:18 | 0:11:46 | 0:09:32 | gibba | master | ubuntu | 20.04 | rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_stable task/test_e2e} | 2 |
Failure Reason: Command failed (workunit test cephadm/create_iscsi_disks.sh) on gibba044 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=54c3c60e724db8935de8cae1fa5d6210af4a2cc6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'
fail | 5956166 | 2021-03-11 16:27:09 | 2021-03-11 16:42:02 | 2021-03-11 16:57:19 | 0:15:17 | 0:09:23 | 0:05:54 | gibba | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2 mon_election/classic} | 2 |
Failure Reason: Command failed on gibba033 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ec6e8472-8289-11eb-9083-001a4aab830c -- ceph orch apply iscsi iscsi user password --placement '1;gibba033=iscsi.a'"
fail | 5956167 | 2021-03-11 16:27:10 | 2021-03-11 16:42:12 | 2021-03-11 17:06:32 | 0:24:20 | 0:16:26 | 0:07:54 | gibba | master | rhel | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 50 --op write_excl 50 --op append_excl 50 --pool unique_pool_0'
fail | 5956168 | 2021-03-11 16:27:11 | 2021-03-11 16:43:43 | 2021-03-11 17:00:40 | 0:16:57 | 0:09:36 | 0:07:21 | gibba | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2 mon_election/connectivity} | 2 |
Failure Reason: Command failed on gibba045 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68ce6d8e-828a-11eb-9083-001a4aab830c -- ceph orch apply iscsi iscsi user password --placement '1;gibba045=iscsi.a'"
pass | 5956169 | 2021-03-11 16:27:12 | 2021-03-11 16:45:43 | 2021-03-11 17:21:15 | 0:35:32 | 0:24:37 | 0:10:55 | gibba | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 |
pass | 5956170 | 2021-03-11 16:27:13 | 2021-03-11 16:48:04 | 2021-03-11 17:25:25 | 0:37:21 | 0:27:07 | 0:10:14 | gibba | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 |
fail | 5956171 | 2021-03-11 16:27:13 | 2021-03-11 16:48:34 | 2021-03-11 17:07:49 | 0:19:15 | 0:12:05 | 0:07:10 | gibba | master | ubuntu | 20.04 | rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_testing task/test_e2e} | 2 |
Failure Reason: Command failed (workunit test cephadm/create_iscsi_disks.sh) on gibba026 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=54c3c60e724db8935de8cae1fa5d6210af4a2cc6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'
pass | 5956172 | 2021-03-11 16:27:14 | 2021-03-11 16:48:45 | 2021-03-11 17:09:06 | 0:20:21 | 0:13:03 | 0:07:18 | gibba | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_20.04_kubic_stable start} | 2 |
fail | 5956173 | 2021-03-11 16:27:15 | 2021-03-11 16:49:55 | 2021-03-11 17:07:54 | 0:17:59 | 0:09:46 | 0:08:13 | gibba | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2 mon_election/classic} | 2 |
Failure Reason: Command failed on gibba028 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2634ea10-828b-11eb-9083-001a4aab830c -- ceph orch apply iscsi iscsi user password --placement '1;gibba028=iscsi.a'"
pass | 5956174 | 2021-03-11 16:27:16 | 2021-03-11 16:50:46 | 2021-03-11 17:14:26 | 0:23:40 | 0:16:47 | 0:06:53 | gibba | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/repair_test} | 2 |
fail | 5956175 | 2021-03-11 16:27:17 | 2021-03-11 16:51:36 | 2021-03-11 17:17:19 | 0:25:43 | 0:18:17 | 0:07:26 | gibba | master | rhel | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-stupid supported-random-distro$/{rhel_8} tasks/progress} | 2 |
Failure Reason: Test failure: test_pool_removal (tasks.mgr.test_progress.TestProgress)
pass | 5956176 | 2021-03-11 16:27:18 | 2021-03-11 16:52:37 | 2021-03-11 17:28:30 | 0:35:53 | 0:21:05 | 0:14:48 | gibba | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 |
fail | 5956177 | 2021-03-11 16:27:19 | 2021-03-11 16:57:28 | 2021-03-11 17:15:51 | 0:18:23 | 0:09:14 | 0:09:09 | gibba | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2 mon_election/connectivity} | 2 |
Failure Reason: Command failed on gibba045 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7f509800-828c-11eb-9083-001a4aab830c -- ceph orch apply iscsi iscsi user password --placement '1;gibba045=iscsi.a'"
fail | 5956178 | 2021-03-11 16:27:20 | 2021-03-11 17:00:50 | 2021-03-11 17:18:10 | 0:17:20 | 0:10:42 | 0:06:38 | gibba | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04_kubic_stable fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 |
Failure Reason: Command failed on gibba044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:54c3c60e724db8935de8cae1fa5d6210af4a2cc6 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1cd5a066-828d-11eb-9083-001a4aab830c -- ceph orch host ls --format=json'
pass | 5956179 | 2021-03-11 16:27:21 | 2021-03-11 17:01:10 | 2021-03-11 18:29:31 | 1:28:21 | 1:16:46 | 0:11:35 | gibba | master | rhel | 8.3 | rados/dashboard/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} supported-random-distro$/{rhel_8} tasks/dashboard} | 2 |
fail | 5956180 | 2021-03-11 16:27:22 | 2021-03-11 17:06:41 | 2021-03-11 17:22:49 | 0:16:08 | 0:08:19 | 0:07:49 | gibba | master | ubuntu | 20.04 | rados/upgrade/pacific-x/parallel/{0-start 1-tasks distro1$/{ubuntu_20.04} mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: Command failed on gibba026 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/daemon-base:latest-pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4aed7a0-828d-11eb-9083-001a4aab830c -- ceph orch host add gibba026'
fail | 5956181 | 2021-03-11 16:27:22 | 2021-03-11 17:07:52 | 2021-03-11 17:36:35 | 0:28:43 | 0:16:57 | 0:11:46 | gibba | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'
dead | 5956182 | 2021-03-11 16:27:23 | 2021-03-11 17:08:03 | 2021-03-11 17:23:57 | 0:15:54 | | | gibba | master | ubuntu | 20.04 | rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_stable task/test_e2e} | 2 |
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
fail | 5956183 | 2021-03-11 16:27:24 | 2021-03-11 17:08:03 | 2021-03-11 17:39:44 | 0:31:41 | 0:23:42 | 0:07:59 | gibba | master | rhel | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2 |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 50 --op write_excl 50 --op append_excl 50 --pool unique_pool_0'
pass | 5956184 | 2021-03-11 16:27:26 | 2021-03-11 17:08:53 | 2021-03-11 17:28:04 | 0:19:11 | 0:13:04 | 0:06:07 | gibba | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect} | 2 |
fail | 5956185 | 2021-03-11 16:27:27 | 2021-03-11 17:09:14 | 2021-03-11 17:26:02 | 0:16:48 | 0:09:33 | 0:07:15 | gibba | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2 mon_election/classic} | 2 |
Failure Reason: Command failed on gibba039 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ef59c4b8-828d-11eb-9083-001a4aab830c -- ceph orch apply iscsi iscsi user password --placement '1;gibba039=iscsi.a'"
fail | 5956186 | 2021-03-11 16:27:28 | 2021-03-11 17:10:55 | 2021-03-11 17:36:00 | 0:25:05 | 0:18:59 | 0:06:06 | gibba | master | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable mon_election/connectivity task/test_orch_cli} | 1 |
Failure Reason: Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)
fail | 5956187 | 2021-03-11 16:27:29 | 2021-03-11 17:10:55 | 2021-03-11 17:29:51 | 0:18:56 | 0:09:14 | 0:09:42 | gibba | master | ubuntu | 20.04 | rados/cephadm/thrash/{0-distro/ubuntu_20.04_kubic_testing 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 |
Failure Reason: Command failed on gibba041 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:54c3c60e724db8935de8cae1fa5d6210af4a2cc6 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fb8c2be4-828e-11eb-9083-001a4aab830c -- ceph orch host ls --format=json'
pass | 5956188 | 2021-03-11 16:27:30 | 2021-03-11 17:14:36 | 2021-03-11 17:41:22 | 0:26:46 | 0:21:36 | 0:05:10 | gibba | master | rhel | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-balanced} | 2 |
fail | 5956189 | 2021-03-11 16:27:31 | 2021-03-11 17:14:46 | 2021-03-11 17:30:54 | 0:16:08 | 0:09:23 | 0:06:45 | gibba | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2 mon_election/connectivity} | 2 |
Failure Reason: Command failed on gibba045 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9663d1d6-828e-11eb-9083-001a4aab830c -- ceph orch apply iscsi iscsi user password --placement '1;gibba045=iscsi.a'"
pass | 5956190 | 2021-03-11 16:27:32 | 2021-03-11 17:15:57 | 2021-03-11 17:52:32 | 0:36:35 | 0:24:40 | 0:11:55 | gibba | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
pass | 5956191 | 2021-03-11 16:27:32 | 2021-03-11 17:17:27 | 2021-03-11 17:43:18 | 0:25:51 | 0:15:26 | 0:10:25 | gibba | master | ubuntu | 18.04 | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 |
fail | 5956192 | 2021-03-11 16:27:33 | 2021-03-11 17:18:17 | 2021-03-11 17:40:29 | 0:22:12 | 0:12:48 | 0:09:24 | gibba | master | ubuntu | 20.04 | rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_testing task/test_e2e} | 2 |
Failure Reason: Command failed (workunit test cephadm/create_iscsi_disks.sh) on gibba030 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=54c3c60e724db8935de8cae1fa5d6210af4a2cc6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'
pass | 5956193 | 2021-03-11 16:27:34 | 2021-03-11 17:21:18 | 2021-03-11 17:43:53 | 0:22:35 | 0:13:58 | 0:08:37 | gibba | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_20.04_kubic_stable start} | 2 |
dead | 5956194 | 2021-03-11 16:27:35 | 2021-03-11 17:22:59 | 2021-03-11 17:39:55 | 0:16:56 | | | gibba | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2 mon_election/classic} | 2 |
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
fail | 5956195 | 2021-03-11 16:27:36 | 2021-03-11 17:23:59 | 2021-03-11 17:46:19 | 0:22:20 | 0:14:03 | 0:08:17 | gibba | master | rhel | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'