User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
rfriedma | 2021-05-21 12:26:30 | 2021-05-21 13:07:11 | 2021-05-22 03:13:49 | 14:06:38 | rados | wip-ronenf-scrub-sched | gibba | 6a6b450 | 19 | 2 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6126449 | 2021-05-21 12:29:07 | 2021-05-21 12:55:39 | 2021-05-21 13:36:01 | 0:40:22 | 0:23:16 | 0:17:06 | gibba | master | rhel | 8.3 | rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/pool-create-delete} | 2 | |
pass | 6126450 | 2021-05-21 12:29:08 | 2021-05-21 13:07:11 | 2021-05-21 14:03:50 | 0:56:39 | 0:49:59 | 0:06:40 | gibba | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 6126451 | 2021-05-21 12:29:09 | 2021-05-21 13:07:12 | 2021-05-21 14:05:22 | 0:58:10 | 0:34:53 | 0:23:17 | gibba | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
dead | 6126452 | 2021-05-21 12:29:10 | 2021-05-21 13:20:34 | 2021-05-22 01:29:04 | 12:08:30 | | | gibba | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
Failure Reason: hit max job timeout
pass | 6126453 | 2021-05-21 12:29:11 | 2021-05-21 13:20:34 | 2021-05-21 14:20:40 | 1:00:06 | 0:34:42 | 0:25:24 | gibba | master | centos | 8.3 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6126454 | 2021-05-21 12:29:11 | 2021-05-21 13:36:07 | 2021-05-21 14:42:40 | 1:06:33 | 0:29:16 | 0:37:17 | gibba | master | centos | 8.3 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
pass | 6126455 | 2021-05-21 12:29:12 | 2021-05-21 14:03:51 | 2021-05-21 14:43:51 | 0:40:00 | 0:10:36 | 0:29:24 | gibba | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6126456 | 2021-05-21 12:29:13 | 2021-05-21 14:20:43 | 2021-05-21 15:25:56 | 1:05:13 | 0:33:00 | 0:32:13 | gibba | master | ubuntu | 18.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
pass | 6126457 | 2021-05-21 12:29:14 | 2021-05-21 14:42:46 | 2021-05-21 15:04:49 | 0:22:03 | 0:11:21 | 0:10:42 | gibba | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 6126458 | 2021-05-21 12:29:15 | 2021-05-21 14:43:57 | 2021-05-21 15:25:10 | 0:41:13 | 0:31:12 | 0:10:01 | gibba | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache-agent-big} | 2 | |
pass | 6126459 | 2021-05-21 12:29:16 | 2021-05-21 14:43:58 | 2021-05-21 15:10:08 | 0:26:10 | 0:12:36 | 0:13:34 | gibba | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-agent-small} | 2 | |
dead | 6126460 | 2021-05-21 12:29:17 | 2021-05-21 14:46:59 | 2021-05-22 02:55:42 | 12:08:43 | | | gibba | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 |
Failure Reason: hit max job timeout
pass | 6126461 | 2021-05-21 12:29:18 | 2021-05-21 14:47:00 | 2021-05-21 15:16:09 | 0:29:09 | 0:18:42 | 0:10:27 | gibba | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-pool-snaps} | 2 | |
pass | 6126462 | 2021-05-21 12:29:19 | 2021-05-21 14:47:01 | 2021-05-21 15:30:15 | 0:43:14 | 0:19:51 | 0:23:23 | gibba | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-snaps-balanced} | 2 | |
pass | 6126463 | 2021-05-21 12:29:20 | 2021-05-21 15:01:04 | 2021-05-21 15:35:54 | 0:34:50 | 0:28:35 | 0:06:15 | gibba | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/cache-snaps} | 2 | |
dead | 6126464 | 2021-05-21 12:29:21 | 2021-05-21 15:01:04 | 2021-05-22 03:13:49 | 12:12:45 | | | gibba | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache} | 2 |
Failure Reason: hit max job timeout
pass | 6126465 | 2021-05-21 12:29:22 | 2021-05-21 15:04:56 | 2021-05-21 15:33:06 | 0:28:10 | 0:15:51 | 0:12:19 | gibba | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/dedup-io-mixed} | 2 | |
pass | 6126466 | 2021-05-21 12:29:23 | 2021-05-21 15:10:17 | 2021-05-21 15:37:12 | 0:26:55 | 0:11:07 | 0:15:48 | gibba | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/dedup-io-snaps} | 2 | |
pass | 6126467 | 2021-05-21 12:29:23 | 2021-05-21 15:16:19 | 2021-05-21 16:00:04 | 0:43:45 | 0:29:37 | 0:14:08 | gibba | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
fail | 6126468 | 2021-05-21 12:29:24 | 2021-05-21 15:25:20 | 2021-05-21 18:57:04 | 3:31:44 | 3:22:48 | 0:08:56 | gibba | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on gibba028 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a6b450123579f7461f23b9dcbc9d719fb079ffa TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 6126469 | 2021-05-21 12:29:25 | 2021-05-21 15:26:02 | 2021-05-21 16:50:50 | 1:24:48 | 1:15:59 | 0:08:49 | gibba | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 6126470 | 2021-05-21 12:29:26 | 2021-05-21 15:26:03 | 2021-05-21 16:55:20 | 1:29:17 | 1:19:06 | 0:10:11 | gibba | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/radosbench} | 2 | |
pass | 6126471 | 2021-05-21 12:29:27 | 2021-05-21 15:30:24 | 2021-05-21 15:56:13 | 0:25:49 | 0:17:40 | 0:08:09 | gibba | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect} | 2 | |
fail | 6126472 | 2021-05-21 12:29:28 | 2021-05-21 15:33:15 | 2021-05-21 16:12:48 | 0:39:33 | 0:29:10 | 0:10:23 | gibba | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/redirect_promote_tests} | 2 | |
Failure Reason: Command failed on gibba032 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'