User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
rzarzynski | 2021-09-27 22:54:01 | 2021-09-27 22:55:44 | 2021-09-28 11:12:59 | 12:17:15 | rados | master | smithi | 99f1f0f | 2 | 19 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6410135 | 2021-09-27 22:54:47 | 2021-09-27 22:55:43 | 2021-09-27 23:45:38 | 0:49:55 | 0:38:29 | 0:11:26 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} | 2 |
Failure Reason: Command failed on smithi039 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.5 flush_pg_stats'
fail | 6410136 | 2021-09-27 22:54:48 | 2021-09-27 22:55:44 | 2021-09-27 23:14:42 | 0:18:58 | 0:12:19 | 0:06:39 | smithi | master | rhel | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 |
Failure Reason: Command failed on smithi205 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
pass | 6410137 | 2021-09-27 22:54:49 | 2021-09-27 22:55:44 | 2021-09-27 23:16:25 | 0:20:41 | 0:10:55 | 0:09:46 | smithi | master | centos | 8.3 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 |
fail | 6410138 | 2021-09-27 22:54:50 | 2021-09-27 22:55:45 | 2021-09-27 23:15:33 | 0:19:48 | 0:12:24 | 0:07:24 | smithi | master | rhel | 8.3 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: Command failed on smithi146 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6410139 | 2021-09-27 22:54:50 | 2021-09-27 22:56:16 | 2021-09-27 23:19:10 | 0:22:54 | 0:12:22 | 0:10:32 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 |
Failure Reason: Command failed on smithi057 with status 5: 'sudo systemctl stop ceph-d632faf6-1fe8-11ec-8c25-001a4aab830c@mon.a'
dead | 6410140 | 2021-09-27 22:54:51 | 2021-09-27 22:56:16 | 2021-09-28 11:04:43 | 12:08:27 | | | smithi | master | centos | 8.3 | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 |
Failure Reason: hit max job timeout
fail | 6410141 | 2021-09-27 22:54:52 | 2021-09-27 22:56:37 | 2021-09-28 00:50:01 | 1:53:24 | 1:42:25 | 0:10:59 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} | 2 |
Failure Reason: Command failed (workunit test rados/stress_watch.sh) on smithi091 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'
pass | 6410142 | 2021-09-27 22:54:53 | 2021-09-27 22:56:47 | 2021-09-27 23:17:10 | 0:20:23 | 0:10:53 | 0:09:30 | smithi | master | centos | 8.3 | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 |
fail | 6410143 | 2021-09-27 22:54:54 | 2021-09-27 22:56:48 | 2021-09-27 23:20:01 | 0:23:13 | 0:15:32 | 0:07:41 | smithi | master | rhel | 8.3 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 |
Failure Reason: Command failed on smithi063 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6410144 | 2021-09-27 22:54:54 | 2021-09-27 22:57:08 | 2021-09-27 23:18:22 | 0:21:14 | 0:11:47 | 0:09:27 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 |
Failure Reason: Command failed on smithi081 with status 5: 'sudo systemctl stop ceph-eb6c83c4-1fe8-11ec-8c25-001a4aab830c@mon.a'
dead | 6410145 | 2021-09-27 22:54:55 | 2021-09-27 22:57:19 | 2021-09-28 11:06:02 | 12:08:43 | | | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 |
Failure Reason: hit max job timeout
fail | 6410146 | 2021-09-27 22:54:56 | 2021-09-27 22:57:19 | 2021-09-27 23:44:36 | 0:47:17 | 0:38:11 | 0:09:06 | smithi | master | centos | 8.3 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 |
Failure Reason: timed out waiting for admin_socket to appear after osd.2 restart
fail | 6410147 | 2021-09-27 22:54:57 | 2021-09-27 22:57:40 | 2021-09-27 23:12:32 | 0:14:52 | 0:06:18 | 0:08:34 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} | 2 |
Failure Reason: Command failed on smithi176 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'
fail | 6410148 | 2021-09-27 22:54:58 | 2021-09-27 22:59:31 | 2021-09-27 23:22:58 | 0:23:27 | 0:12:43 | 0:10:44 | smithi | master | centos | 8.3 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 2 |
Failure Reason: Command failed (workunit test mon/pg_autoscaler.sh) on smithi051 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh'
fail | 6410149 | 2021-09-27 22:54:59 | 2021-09-27 23:00:02 | 2021-09-27 23:35:51 | 0:35:49 | 0:23:41 | 0:12:08 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} | 2 |
Failure Reason: Command failed on smithi025 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.6 flush_pg_stats'
fail | 6410150 | 2021-09-27 22:54:59 | 2021-09-27 23:00:23 | 2021-09-27 23:21:46 | 0:21:23 | 0:11:32 | 0:09:51 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 |
Failure Reason: Command failed on smithi183 with status 5: 'sudo systemctl stop ceph-54bbc1aa-1fe9-11ec-8c25-001a4aab830c@mon.a'
fail | 6410151 | 2021-09-27 22:55:00 | 2021-09-27 23:00:43 | 2021-09-27 23:22:22 | 0:21:39 | 0:10:37 | 0:11:02 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/repair_test} | 2 |
fail | 6410152 | 2021-09-27 22:55:01 | 2021-09-27 23:01:04 | 2021-09-27 23:22:09 | 0:21:05 | 0:11:56 | 0:09:09 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi194 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 6410153 | 2021-09-27 22:55:02 | 2021-09-27 23:01:05 | 2021-09-27 23:21:34 | 0:20:29 | 0:12:33 | 0:07:56 | smithi | master | rhel | 8.3 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: Command failed on smithi123 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
fail | 6410154 | 2021-09-27 22:55:03 | 2021-09-27 23:02:25 | 2021-09-27 23:22:42 | 0:20:17 | 0:13:00 | 0:07:17 | smithi | master | rhel | 8.3 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 2 |
Failure Reason: Command failed on smithi191 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
dead | 6410155 | 2021-09-27 22:55:04 | 2021-09-27 23:03:16 | 2021-09-28 11:12:59 | 12:09:43 | | | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/dedup-io-mixed} | 2 |
Failure Reason: hit max job timeout
fail | 6410156 | 2021-09-27 22:55:04 | 2021-09-27 23:03:37 | 2021-09-27 23:25:52 | 0:22:15 | 0:11:01 | 0:11:14 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 |
Failure Reason: Command failed on smithi106 with status 5: 'sudo systemctl stop ceph-d3a95afe-1fe9-11ec-8c25-001a4aab830c@mon.a'
fail | 6410157 | 2021-09-27 22:55:05 | 2021-09-27 23:04:58 | 2021-09-27 23:17:52 | 0:12:54 | 0:06:26 | 0:06:28 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 |
Failure Reason: Command failed on smithi110 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'
fail | 6410158 | 2021-09-27 22:55:06 | 2021-09-27 23:04:59 | 2021-09-27 23:23:42 | 0:18:43 | 0:12:37 | 0:06:06 | smithi | master | rhel | 8.3 | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: Command failed on smithi041 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
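The per-job rows above can be tallied to cross-check the run summary (2 pass, 19 fail, 3 dead). A minimal sketch in Python, assuming the table has been saved as pipe-delimited text; the `tally_statuses` helper is hypothetical, not part of teuthology or pulpito:

```python
from collections import Counter

def tally_statuses(lines):
    """Count job statuses (pass/fail/dead) from pipe-delimited result rows.

    Only lines whose first cell is a known status are counted; header,
    separator, and 'Failure Reason' lines are skipped.
    """
    statuses = {"pass", "fail", "dead"}
    counts = Counter()
    for line in lines:
        # The status is everything before the first pipe, if any.
        first = line.split("|", 1)[0].strip()
        if first in statuses:
            counts[first] += 1
    return counts

# Example with a few rows shaped like the table above (cells elided):
rows = [
    "fail | 6410135 | 2021-09-27 22:54:47 | ... | 2 |",
    "Failure Reason: Command failed on smithi039 with status 124: ...",
    "pass | 6410137 | 2021-09-27 22:54:49 | ... | 2 |",
    "dead | 6410140 | 2021-09-27 22:54:51 | ... | 1 |",
]
print(tally_statuses(rows))  # Counter({'fail': 1, 'pass': 1, 'dead': 1})
```

Run against the full table, the counts should match the Pass/Fail/Dead columns of the summary row at the top.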