User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
rzarzynski | 2021-05-26 12:20:26 | 2021-05-26 19:54:25 | 2021-05-27 08:13:02 | 12:18:37 | rados | master | smithi | aa1dc55 | 2 | 19 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6136907 | 2021-05-26 12:21:01 | 2021-05-26 19:54:25 | 2021-05-26 20:39:20 | 0:44:55 | 0:35:31 | 0:09:24 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason: Command failed on smithi119 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats'
fail | 6136908 | 2021-05-26 12:21:02 | 2021-05-26 19:54:26 | 2021-05-26 21:00:11 | 1:05:45 | 0:58:07 | 0:07:38 | smithi | master | rhel | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
Failure Reason: wait_for_clean: failed before timeout expired
pass | 6136909 | 2021-05-26 12:21:03 | 2021-05-26 19:54:26 | 2021-05-26 20:16:34 | 0:22:08 | 0:10:19 | 0:11:49 | smithi | master | centos | 8.3 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
dead | 6136910 | 2021-05-26 12:21:04 | 2021-05-26 19:55:27 | 2021-05-27 08:04:56 | 12:09:29 | | | smithi | master | rhel | 8.3 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: hit max job timeout
fail | 6136911 | 2021-05-26 12:21:04 | 2021-05-26 19:55:47 | 2021-05-26 20:17:16 | 0:21:29 | 0:10:05 | 0:11:24 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi002 with status 5: 'sudo systemctl stop ceph-0e0a5622-be5f-11eb-8c11-001a4aab830c@mon.a'
fail | 6136912 | 2021-05-26 12:21:05 | 2021-05-26 19:56:08 | 2021-05-26 21:21:12 | 1:25:04 | 1:15:55 | 0:09:09 | smithi | master | centos | 8.3 | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: wait_for_clean: failed before timeout expired
fail | 6136913 | 2021-05-26 12:21:06 | 2021-05-26 19:56:08 | 2021-05-26 23:19:33 | 3:23:25 | 3:12:44 | 0:10:41 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} | 2 | |
Failure Reason: Command failed (workunit test rados/stress_watch.sh) on smithi087 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'
pass | 6136914 | 2021-05-26 12:21:07 | 2021-05-26 19:56:19 | 2021-05-26 20:15:33 | 0:19:14 | 0:09:35 | 0:09:39 | smithi | master | centos | 8.3 | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
fail | 6136915 | 2021-05-26 12:21:07 | 2021-05-26 19:56:39 | 2021-05-26 20:25:17 | 0:28:38 | 0:20:06 | 0:08:32 | smithi | master | rhel | 8.3 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi110 with status 5: 'sudo systemctl stop ceph-3bcddfc4-be60-11eb-8c11-001a4aab830c@mon.a'
fail | 6136916 | 2021-05-26 12:21:08 | 2021-05-26 19:57:50 | 2021-05-26 20:18:13 | 0:20:23 | 0:09:47 | 0:10:36 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi104 with status 5: 'sudo systemctl stop ceph-69dd0558-be5f-11eb-8c11-001a4aab830c@mon.a'
dead | 6136917 | 2021-05-26 12:21:09 | 2021-05-26 19:59:11 | 2021-05-27 08:07:56 | 12:08:45 | | | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 |
Failure Reason: hit max job timeout
fail | 6136918 | 2021-05-26 12:21:10 | 2021-05-26 19:59:11 | 2021-05-26 20:26:02 | 0:26:51 | 0:17:25 | 0:09:26 | smithi | master | centos | 8.3 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed on smithi005 with status 6: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.3 flush_pg_stats'
fail | 6136919 | 2021-05-26 12:21:10 | 2021-05-26 19:59:42 | 2021-05-26 23:23:28 | 3:23:46 | 3:15:44 | 0:08:02 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} | 2 | |
Failure Reason: Command failed (workunit test rados/load-gen-mix.sh) on smithi043 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rados/load-gen-mix.sh'
fail | 6136920 | 2021-05-26 12:21:11 | 2021-05-26 20:00:22 | 2021-05-26 20:20:36 | 0:20:14 | 0:10:05 | 0:10:09 | smithi | master | centos | 8.3 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 2 | |
Failure Reason: Command failed on smithi050 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1 2>> /var/log/ceph/ceph-osd.1.log'
fail | 6136921 | 2021-05-26 12:21:12 | 2021-05-26 20:01:43 | 2021-05-26 20:33:15 | 0:31:32 | 0:21:28 | 0:10:04 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason: Command failed on smithi013 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
fail | 6136922 | 2021-05-26 12:21:13 | 2021-05-26 20:02:03 | 2021-05-26 20:21:09 | 0:19:06 | 0:09:59 | 0:09:07 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi019 with status 5: 'sudo systemctl stop ceph-d87ea188-be5f-11eb-8c11-001a4aab830c@mon.a'
fail | 6136923 | 2021-05-26 12:21:13 | 2021-05-26 20:02:04 | 2021-05-26 20:23:06 | 0:21:02 | 0:10:23 | 0:10:39 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/repair_test} | 2 | |
fail | 6136924 | 2021-05-26 12:21:14 | 2021-05-26 20:02:25 | 2021-05-26 20:23:19 | 0:20:54 | 0:11:57 | 0:08:57 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi046 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 6136925 | 2021-05-26 12:21:15 | 2021-05-26 20:02:27 | 2021-05-26 21:07:53 | 1:05:26 | 0:58:22 | 0:07:04 | smithi | master | rhel | 8.3 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: "2021-05-26T20:26:46.200165+0000 mon.a (mon.0) 120 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs inactive, 2 pgs peering (PG_AVAILABILITY)" in cluster log
fail | 6136926 | 2021-05-26 12:21:15 | 2021-05-26 20:02:57 | 2021-05-26 23:32:54 | 3:29:57 | 3:22:51 | 0:07:06 | smithi | master | rhel | 8.3 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 2 | |
Failure Reason: Command failed (workunit test rados/load-gen-mix-small.sh) on smithi122 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix-small.sh'
dead | 6136927 | 2021-05-26 12:21:16 | 2021-05-26 20:03:48 | 2021-05-27 08:13:02 | 12:09:14 | | | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/dedup-io-mixed} | 2 |
Failure Reason: hit max job timeout
fail | 6136928 | 2021-05-26 12:21:17 | 2021-05-26 20:04:08 | 2021-05-26 20:23:52 | 0:19:44 | 0:10:04 | 0:09:40 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi040 with status 5: 'sudo systemctl stop ceph-379b5f12-be60-11eb-8c11-001a4aab830c@mon.a'
fail | 6136929 | 2021-05-26 12:21:18 | 2021-05-26 20:04:59 | 2021-05-26 23:26:54 | 3:21:55 | 3:15:25 | 0:06:30 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi067 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6136930 | 2021-05-26 12:21:18 | 2021-05-26 20:05:39 | 2021-05-26 20:32:46 | 0:27:07 | 0:19:58 | 0:07:09 | smithi | master | rhel | 8.3 | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 |