User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
kchai | 2021-03-10 06:55:19 | 2021-03-10 07:19:38 | 2021-03-10 10:22:24 | 3:02:46 | rados | wip-kefu-testing-2021-03-09-1847 | gibba | c6bef90 | 15 | 9 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 5952982 | 2021-03-10 06:56:21 | 2021-03-10 07:19:38 | 2021-03-10 08:43:57 | 1:24:19 | 0:25:06 | 0:59:13 | gibba | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 5952983 | 2021-03-10 06:56:22 | 2021-03-10 08:08:44 | 2021-03-10 08:22:00 | 0:13:16 | 0:06:57 | 0:06:19 | gibba | master | ubuntu | 20.04 | rados/cephadm/smoke/{distro/ubuntu_20.04_kubic_stable fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on gibba008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c6bef90e2f561208e1aebe76e03aaeb6fae2552e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3dda861a-8179-11eb-9080-001a4aab830c -- ceph orch host ls --format=json'
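For readability, the failing cephadm invocation reflowed into script form (a sketch; image, fsid, and paths are copied verbatim from the failure reason above):

```sh
# Open a containerized shell from the CI image and query the orchestrator
# for its managed hosts as JSON; this is the call that returned status 1.
sudo /home/ubuntu/cephtest/cephadm \
    --image quay.ceph.io/ceph-ci/ceph:c6bef90e2f561208e1aebe76e03aaeb6fae2552e \
    shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
    --fsid 3dda861a-8179-11eb-9080-001a4aab830c -- \
    ceph orch host ls --format=json
```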
pass | 5952984 | 2021-03-10 06:56:23 | 2021-03-10 08:08:44 | 2021-03-10 08:44:03 | 0:35:19 | 0:28:51 | 0:06:28 | gibba | master | ubuntu | 20.04 | rados/cephadm/thrash/{0-distro/ubuntu_20.04_kubic_testing 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
fail | 5952985 | 2021-03-10 06:56:23 | 2021-03-10 08:08:44 | 2021-03-10 08:55:09 | 0:46:25 | 0:22:36 | 0:23:49 | gibba | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'
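Reflowed, the crashing stress-test call is easier to parse; each `--op <name> <weight>` pair sets the relative frequency of that operation (content verbatim from the failure reason above, which recurs in job 5952993 below):

```sh
# ceph_test_rados against an erasure-coded pool. Plain writes are disabled
# (--op write 0); reads and appends dominate, with snapshot, xattr, and
# copy_from ops mixed in. --max-seconds 0 runs until --max-ops is reached
# rather than stopping at a wall-clock limit.
CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage \
    ceph_test_rados --no-omap --ec-pool \
    --max-ops 4000 --objects 50 --max-in-flight 16 \
    --size 4000000 --min-stride-size 400000 --max-stride-size 800000 \
    --max-seconds 0 \
    --op read 100 --op write 0 --op delete 50 \
    --op snap_create 50 --op snap_remove 50 --op rollback 50 \
    --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 \
    --pool unique_pool_0
```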
fail | 5952986 | 2021-03-10 06:56:24 | 2021-03-10 08:22:06 | 2021-03-10 09:01:03 | 0:38:57 | 0:11:32 | 0:27:25 | gibba | master | ubuntu | 20.04 | rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_testing task/test_e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/create_iscsi_disks.sh) on gibba041 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c6bef90e2f561208e1aebe76e03aaeb6fae2552e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'
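The same workunit failure recurs in jobs 5952992, 5952996, and 5953002 below. Reflowed (content verbatim from the failure reason), this is the standard teuthology workunit harness: create a scratch directory, export the test environment pointing at the tree cloned for this SHA1, then run the script under coverage wrappers and a 3-hour timeout:

```sh
mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp &&
cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp &&
CEPH_CLI_TEST_DUP_COMMAND=1 \
CEPH_REF=c6bef90e2f561208e1aebe76e03aaeb6fae2552e \
TESTDIR="/home/ubuntu/cephtest" \
CEPH_ARGS="--cluster ceph" \
CEPH_ID="1" \
PATH=$PATH:/usr/sbin \
CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 \
CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 \
adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage \
    timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh
```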
pass | 5952987 | 2021-03-10 06:56:25 | 2021-03-10 08:43:59 | 2021-03-10 09:11:07 | 0:27:08 | 0:21:23 | 0:05:45 | gibba | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/small-objects} | 2 | |
pass | 5952988 | 2021-03-10 06:56:26 | 2021-03-10 08:44:09 | 2021-03-10 09:23:23 | 0:39:14 | 0:33:09 | 0:06:05 | gibba | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04_kubic_testing fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} | 2 | |
pass | 5952989 | 2021-03-10 06:56:26 | 2021-03-10 08:44:29 | 2021-03-10 09:23:27 | 0:38:58 | 0:30:31 | 0:08:27 | gibba | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 5952990 | 2021-03-10 06:56:27 | 2021-03-10 08:44:30 | 2021-03-10 09:06:11 | 0:21:41 | 0:15:29 | 0:06:12 | gibba | master | ubuntu | 20.04 | rados/upgrade/pacific-x/parallel/{0-start 1-tasks distro1$/{ubuntu_20.04_kubic_stable} mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed on gibba014 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/daemon-base:latest-pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6046c6c8-817e-11eb-9080-001a4aab830c -e sha1=c6bef90e2f561208e1aebe76e03aaeb6fae2552e -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
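With the nested shell quoting stripped, the upgrade test's final check is the one-liner below: `ceph versions` emits JSON whose `.overall` object maps each running version string to a daemon count, and `jq -e` exits non-zero unless exactly one version remains. The same check fails again in job 5953001 below. A sketch with illustrative (assumed) output:

```sh
# Passes only once every daemon reports the same version, i.e. the
# upgrade has fully converged on a single Ceph release.
ceph versions | jq -e '.overall | length == 1'

# Assumed example of failing output: two versions still running, so
# '.overall | length == 1' evaluates to false and jq exits with status 1.
#   { "overall": { "ceph version 16.2.x (...) pacific (stable)": 3,
#                  "ceph version 17.0.x (...) quincy (dev)":     5 } }
```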
pass | 5952991 | 2021-03-10 06:56:28 | 2021-03-10 08:45:10 | 2021-03-10 09:09:56 | 0:24:46 | 0:15:16 | 0:09:30 | gibba | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/rados_cls_all validater/lockdep} | 2 | |
fail | 5952992 | 2021-03-10 06:56:28 | 2021-03-10 08:45:10 | 2021-03-10 09:03:18 | 0:18:08 | 0:11:15 | 0:06:53 | gibba | master | ubuntu | 20.04 | rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_stable task/test_e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/create_iscsi_disks.sh) on gibba040 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c6bef90e2f561208e1aebe76e03aaeb6fae2552e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'
fail | 5952993 | 2021-03-10 06:56:29 | 2021-03-10 08:46:11 | 2021-03-10 09:11:05 | 0:24:54 | 0:13:55 | 0:10:59 | gibba | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'
pass | 5952994 | 2021-03-10 06:56:30 | 2021-03-10 08:46:11 | 2021-03-10 09:14:09 | 0:27:58 | 0:12:59 | 0:14:59 | gibba | master | ubuntu | 20.04 | rados/cephadm/smoke/{distro/ubuntu_20.04_kubic_stable fixed-2 mon_election/classic start} | 2 | |
pass | 5952995 | 2021-03-10 06:56:30 | 2021-03-10 08:55:12 | 2021-03-10 09:38:46 | 0:43:34 | 0:31:24 | 0:12:10 | gibba | master | ubuntu | 18.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} | 3 | |
fail | 5952996 | 2021-03-10 06:56:31 | 2021-03-10 08:57:33 | 2021-03-10 09:14:42 | 0:17:09 | 0:11:28 | 0:05:41 | gibba | master | ubuntu | 20.04 | rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_testing task/test_e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/create_iscsi_disks.sh) on gibba007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c6bef90e2f561208e1aebe76e03aaeb6fae2552e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'
pass | 5952997 | 2021-03-10 06:56:32 | 2021-03-10 08:57:33 | 2021-03-10 10:22:24 | 1:24:51 | 1:09:12 | 0:15:39 | gibba | master | ubuntu | 18.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/radosbench} | 3 | |
pass | 5952998 | 2021-03-10 06:56:32 | 2021-03-10 09:03:24 | 2021-03-10 09:45:12 | 0:41:48 | 0:29:54 | 0:11:54 | gibba | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
pass | 5952999 | 2021-03-10 06:56:33 | 2021-03-10 09:06:14 | 2021-03-10 09:45:12 | 0:38:58 | 0:28:28 | 0:10:30 | gibba | master | ubuntu | 20.04 | rados/cephadm/thrash/{0-distro/ubuntu_20.04_kubic_testing 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
pass | 5953000 | 2021-03-10 06:56:34 | 2021-03-10 09:10:05 | 2021-03-10 09:48:01 | 0:37:56 | 0:25:39 | 0:12:17 | gibba | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 5953001 | 2021-03-10 06:56:34 | 2021-03-10 09:11:15 | 2021-03-10 09:34:15 | 0:23:00 | 0:14:11 | 0:08:49 | gibba | master | ubuntu | 18.04 | rados/upgrade/pacific-x/parallel/{0-start 1-tasks distro1$/{ubuntu_18.04} mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed on gibba016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/daemon-base:latest-pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5b702744-8182-11eb-9080-001a4aab830c -e sha1=c6bef90e2f561208e1aebe76e03aaeb6fae2552e -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 5953002 | 2021-03-10 06:56:35 | 2021-03-10 09:11:15 | 2021-03-10 09:31:19 | 0:20:04 | 0:11:00 | 0:09:04 | gibba | master | ubuntu | 20.04 | rados/cephadm/dashboard/{0-distro/ubuntu_20.04_kubic_stable task/test_e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/create_iscsi_disks.sh) on gibba008 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c6bef90e2f561208e1aebe76e03aaeb6fae2552e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cephadm/create_iscsi_disks.sh'
pass | 5953003 | 2021-03-10 06:56:36 | 2021-03-10 09:14:16 | 2021-03-10 09:49:13 | 0:34:57 | 0:24:51 | 0:10:06 | gibba | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 5953004 | 2021-03-10 06:56:36 | 2021-03-10 09:14:46 | 2021-03-10 10:00:27 | 0:45:41 | 0:27:03 | 0:18:38 | gibba | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/rados_api_tests} | 2 | |
pass | 5953005 | 2021-03-10 06:56:37 | 2021-03-10 09:23:27 | 2021-03-10 09:46:01 | 0:22:34 | 0:17:37 | 0:04:57 | gibba | master | rhel | 8.3 | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 1 |