User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
rzarzynski | 2021-05-07 07:41:02 | 2021-05-07 07:42:52 | 2021-05-07 20:01:40 | 12:18:48 | rados | master | smithi | 1b18e07 | 3 | 17 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6104530 | 2021-05-07 07:41:35 | 2021-05-07 07:42:52 | 2021-05-07 08:13:55 | 0:31:03 | 0:20:19 | 0:10:44 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason: Command failed on smithi077 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
fail | 6104531 | 2021-05-07 07:41:36 | 2021-05-07 07:43:34 | 2021-05-07 10:01:31 | 2:17:57 | 2:11:32 | 0:06:25 | smithi | master | rhel | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
Failure Reason: Command failed (workunit test mon/caps.sh) on smithi152 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'
pass | 6104532 | 2021-05-07 07:41:37 | 2021-05-07 07:44:14 | 2021-05-07 08:05:21 | 0:21:07 | 0:08:51 | 0:12:16 | smithi | master | centos | 8.3 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
dead | 6104533 | 2021-05-07 07:41:38 | 2021-05-07 07:45:55 | 2021-05-07 19:57:16 | 12:11:21 | | | smithi | master | rhel | 8.3 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: hit max job timeout
fail | 6104534 | 2021-05-07 07:41:38 | 2021-05-07 07:46:05 | 2021-05-07 08:57:22 | 1:11:17 | 1:00:33 | 0:10:44 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: wait_for_clean: failed before timeout expired
dead | 6104535 | 2021-05-07 07:41:39 | 2021-05-07 07:46:16 | 2021-05-07 19:55:49 | 12:09:33 | | | smithi | master | centos | 8.3 | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 |
Failure Reason: hit max job timeout
fail | 6104536 | 2021-05-07 07:41:40 | 2021-05-07 07:46:46 | 2021-05-07 11:08:32 | 3:21:46 | 3:11:27 | 0:10:19 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} | 2 | |
Failure Reason: Command failed (workunit test rados/stress_watch.sh) on smithi046 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'
pass | 6104537 | 2021-05-07 07:41:41 | 2021-05-07 07:46:47 | 2021-05-07 08:06:19 | 0:19:32 | 0:09:41 | 0:09:51 | smithi | master | centos | 8.3 | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
fail | 6104538 | 2021-05-07 07:41:41 | 2021-05-07 07:47:17 | 2021-05-07 08:28:26 | 0:41:09 | 0:34:59 | 0:06:10 | smithi | master | rhel | 8.3 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: timeout expired in wait_until_healthy
fail | 6104539 | 2021-05-07 07:41:42 | 2021-05-07 07:47:28 | 2021-05-07 08:22:51 | 0:35:23 | 0:24:55 | 0:10:28 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: timeout expired in wait_until_healthy
dead | 6104540 | 2021-05-07 07:41:43 | 2021-05-07 07:47:48 | 2021-05-07 19:56:43 | 12:08:55 | | | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 |
Failure Reason: hit max job timeout
fail | 6104541 | 2021-05-07 07:41:44 | 2021-05-07 07:47:59 | 2021-05-07 08:14:56 | 0:26:57 | 0:17:08 | 0:09:49 | smithi | master | centos | 8.3 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed on smithi073 with status 6: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.3 flush_pg_stats'
fail | 6104542 | 2021-05-07 07:41:45 | 2021-05-07 07:47:59 | 2021-05-07 11:09:54 | 3:21:55 | 3:14:49 | 0:07:06 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} | 2 | |
Failure Reason: Command failed (workunit test rados/load-gen-mix.sh) on smithi090 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'
pass | 6104543 | 2021-05-07 07:41:45 | 2021-05-07 07:48:10 | 2021-05-07 08:11:36 | 0:23:26 | 0:12:44 | 0:10:42 | smithi | master | centos | 8.3 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 2 | |
fail | 6104544 | 2021-05-07 07:41:46 | 2021-05-07 07:48:30 | 2021-05-07 08:20:29 | 0:31:59 | 0:20:50 | 0:11:09 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason: Command failed on smithi018 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
fail | 6104545 | 2021-05-07 07:41:47 | 2021-05-07 07:49:51 | 2021-05-07 08:25:24 | 0:35:33 | 0:25:22 | 0:10:11 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: timeout expired in wait_until_healthy
fail | 6104546 | 2021-05-07 07:41:48 | 2021-05-07 07:50:21 | 2021-05-07 08:09:21 | 0:19:00 | 0:08:47 | 0:10:13 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/repair_test} | 2 | |
Failure Reason: Command failed on smithi062 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 0 ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-osd.5.asok injectmdataerr repair_pool_1 repair_test_obj'
fail | 6104547 | 2021-05-07 07:41:49 | 2021-05-07 07:50:22 | 2021-05-07 08:15:54 | 0:25:32 | 0:10:18 | 0:15:14 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi194 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 6104548 | 2021-05-07 07:41:49 | 2021-05-07 07:51:12 | 2021-05-07 11:34:38 | 3:43:26 | 3:37:22 | 0:06:04 | smithi | master | rhel | 8.3 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi121 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
fail | 6104549 | 2021-05-07 07:41:50 | 2021-05-07 07:51:22 | 2021-05-07 08:14:45 | 0:23:23 | 0:16:46 | 0:06:37 | smithi | master | rhel | 8.3 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 2 | |
Failure Reason: Command failed on smithi082 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2 2>> /var/log/ceph/ceph-osd.2.log'
dead | 6104550 | 2021-05-07 07:41:51 | 2021-05-07 07:51:53 | 2021-05-07 20:01:40 | 12:09:47 | | | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/dedup-io-mixed} | 2 |
Failure Reason: hit max job timeout
fail | 6104551 | 2021-05-07 07:52:03 | 2021-05-07 07:52:03 | 2021-05-07 08:28:39 | 0:36:36 | 0:25:04 | 0:11:32 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: timeout expired in wait_until_healthy
fail | 6104552 | 2021-05-07 07:41:52 | 2021-05-07 07:53:34 | 2021-05-07 08:12:59 | 0:19:25 | 0:11:28 | 0:07:57 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi172 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 6 2>> /var/log/ceph/ceph-osd.6.log'
fail | 6104553 | 2021-05-07 07:41:53 | 2021-05-07 07:53:55 | 2021-05-07 08:19:07 | 0:25:12 | 0:19:15 | 0:05:57 | smithi | master | rhel | 8.3 | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 |