User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
rzarzynski | 2021-05-19 21:28:31 | 2021-05-20 02:42:33 | 2021-05-20 15:01:38 | 12:19:05 | rados | master | smithi | ed8c0af | 3 | 20 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6124059 | 2021-05-19 21:29:05 | 2021-05-20 02:41:52 | 2021-05-20 03:13:33 | 0:31:41 | 0:20:09 | 0:11:32 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason: Command failed on smithi072 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph log '1 of 30'"
fail | 6124060 | 2021-05-19 21:29:05 | 2021-05-20 02:41:52 | 2021-05-20 03:45:30 | 1:03:38 | 0:56:00 | 0:07:38 | smithi | master | rhel | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
Failure Reason: wait_for_clean: failed before timeout expired
pass | 6124061 | 2021-05-19 21:29:06 | 2021-05-20 02:42:13 | 2021-05-20 03:02:16 | 0:20:03 | 0:10:17 | 0:09:46 | smithi | master | centos | 8.3 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
fail | 6124062 | 2021-05-19 21:29:07 | 2021-05-20 02:42:33 | 2021-05-20 03:06:47 | 0:24:14 | 0:18:22 | 0:05:52 | smithi | master | rhel | 8.3 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed on smithi013 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3 2>> /var/log/ceph/ceph-osd.3.log'
fail | 6124063 | 2021-05-19 21:29:07 | 2021-05-20 02:42:34 | 2021-05-20 03:19:45 | 0:37:11 | 0:27:32 | 0:09:39 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
fail | 6124064 | 2021-05-19 21:29:08 | 2021-05-20 02:43:04 | 2021-05-20 04:01:22 | 1:18:18 | 1:07:05 | 0:11:13 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
Failure Reason: reached maximum tries (500) after waiting for 3000 seconds
fail | 6124065 | 2021-05-19 21:29:09 | 2021-05-20 02:44:15 | 2021-05-20 06:07:37 | 3:23:22 | 3:16:08 | 0:07:14 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_stress_watch} | 2 | |
Failure Reason: Command failed (workunit test rados/stress_watch.sh) on smithi053 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ed8c0af9febc556f774e138c0e55c9a59680edf4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'
fail | 6124066 | 2021-05-19 21:29:10 | 2021-05-20 02:44:46 | 2021-05-20 03:06:08 | 0:21:22 | 0:11:50 | 0:09:32 | smithi | master | centos | 8.3 | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
pass | 6124067 | 2021-05-19 21:29:10 | 2021-05-20 02:45:06 | 2021-05-20 03:08:15 | 0:23:09 | 0:12:57 | 0:10:12 | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/rados_5925} | 2 | |
fail | 6124068 | 2021-05-19 21:29:11 | 2021-05-20 02:45:06 | 2021-05-20 03:28:34 | 0:43:28 | 0:36:23 | 0:07:05 | smithi | master | rhel | 8.3 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
fail | 6124069 | 2021-05-19 21:29:12 | 2021-05-20 02:45:47 | 2021-05-20 03:41:02 | 0:55:15 | 0:44:35 | 0:10:40 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: wait_for_clean: failed before timeout expired
dead | 6124070 | 2021-05-19 21:29:12 | 2021-05-20 02:45:47 | 2021-05-20 14:54:51 | 12:09:04 | | | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 |
Failure Reason: hit max job timeout
fail | 6124071 | 2021-05-19 21:29:13 | 2021-05-20 02:45:58 | 2021-05-20 03:10:54 | 0:24:56 | 0:18:15 | 0:06:41 | smithi | master | rhel | 8.3 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed on smithi064 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2 2>> /var/log/ceph/ceph-osd.2.log'
fail | 6124072 | 2021-05-19 21:29:14 | 2021-05-20 02:46:08 | 2021-05-20 06:08:40 | 3:22:32 | 3:11:44 | 0:10:48 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_mix} | 2 | |
Failure Reason: Command failed (workunit test rados/load-gen-mix.sh) on smithi120 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ed8c0af9febc556f774e138c0e55c9a59680edf4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rados/load-gen-mix.sh'
fail | 6124073 | 2021-05-19 21:29:15 | 2021-05-20 02:47:19 | 2021-05-20 03:38:49 | 0:51:30 | 0:43:40 | 0:07:50 | smithi | master | rhel | 8.3 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 2 | |
Failure Reason: Command failed (workunit test mon/pg_autoscaler.sh) on smithi061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ed8c0af9febc556f774e138c0e55c9a59680edf4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh'
dead | 6124074 | 2021-05-19 21:29:16 | 2021-05-20 02:47:50 | 2021-05-20 14:57:10 | 12:09:20 | | | smithi | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/write_fadvise_dontneed} | 2 |
Failure Reason: hit max job timeout
fail | 6124075 | 2021-05-19 21:29:16 | 2021-05-20 02:49:20 | 2021-05-20 03:22:59 | 0:33:39 | 0:22:22 | 0:11:17 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason: Command failed on smithi109 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
fail | 6124076 | 2021-05-19 21:29:17 | 2021-05-20 02:50:11 | 2021-05-20 03:55:02 | 1:04:51 | 0:58:59 | 0:05:52 | smithi | master | rhel | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} | 2 | |
Failure Reason: wait_for_clean: failed before timeout expired
fail | 6124077 | 2021-05-19 21:29:18 | 2021-05-20 02:50:21 | 2021-05-20 03:27:36 | 0:37:15 | 0:27:36 | 0:09:39 | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
dead | 6124078 | 2021-05-19 21:29:19 | 2021-05-20 02:50:42 | 2021-05-20 14:59:21 | 12:08:39 | | | smithi | master | rhel | 8.3 | rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: hit max job timeout
fail | 6124079 | 2021-05-19 21:29:19 | 2021-05-20 02:50:42 | 2021-05-20 06:30:42 | 3:40:00 | 3:29:47 | 0:10:13 | smithi | master | centos | 8.3 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi179 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ed8c0af9febc556f774e138c0e55c9a59680edf4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
fail | 6124080 | 2021-05-19 21:29:20 | 2021-05-20 02:51:33 | 2021-05-20 06:23:11 | 3:31:38 | 3:23:31 | 0:08:07 | smithi | master | rhel | 8.3 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 2 | |
Failure Reason: Command failed (workunit test rados/load-gen-mix-small.sh) on smithi134 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ed8c0af9febc556f774e138c0e55c9a59680edf4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix-small.sh'
dead | 6124081 | 2021-05-19 21:29:21 | 2021-05-20 02:52:13 | 2021-05-20 15:01:38 | 12:09:25 | | | smithi | master | centos | 8.3 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/sync workloads/pool-create-delete} | 2 |
Failure Reason: hit max job timeout
fail | 6124082 | 2021-05-19 21:29:22 | 2021-05-20 02:52:24 | 2021-05-20 03:31:04 | 0:38:40 | 0:28:25 | 0:10:15 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
pass | 6124083 | 2021-05-19 21:29:22 | 2021-05-20 02:52:24 | 2021-05-20 03:14:28 | 0:22:04 | 0:09:48 | 0:12:16 | smithi | master | centos | 8.3 | rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 3 | |
fail | 6124084 | 2021-05-19 21:29:23 | 2021-05-20 02:53:25 | 2021-05-20 03:24:31 | 0:31:06 | 0:23:57 | 0:07:09 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 | |
Failure Reason: reached maximum tries (90) after waiting for 540 seconds
fail | 6124085 | 2021-05-19 21:29:24 | 2021-05-20 02:53:26 | 2021-05-20 03:29:19 | 0:35:53 | 0:26:29 | 0:09:24 | smithi | master | centos | 8.3 | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 |
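The summary row at the top of the run (3 pass, 20 fail, 4 dead) can be re-derived by tallying the first cell of each job row. A minimal sketch, assuming the pipe-delimited rows are available as strings; the helper name `tally_statuses` is mine and is not part of teuthology or pulpito:

```python
from collections import Counter

def tally_statuses(rows):
    """Count pass/fail/dead outcomes from pipe-delimited job rows.

    The first cell of each job row holds its status; lines that are not
    job rows (e.g. 'Failure Reason: ...') are skipped.
    """
    counts = Counter()
    for row in rows:
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        if cells and cells[0] in {"pass", "fail", "dead"}:
            counts[cells[0]] += 1
    return counts

# Example with rows shaped like the table above (fields abridged):
sample = [
    "fail | 6124059 | 2021-05-19 21:29:05 | smithi | centos | 8.3 |",
    "pass | 6124061 | 2021-05-19 21:29:06 | smithi | centos | 8.3 |",
    "dead | 6124070 | 2021-05-19 21:29:12 | smithi | centos | 8.3 |",
    "Failure Reason: hit max job timeout",
]
print(tally_statuses(sample))  # Counter({'fail': 1, 'pass': 1, 'dead': 1})
```

Applied to the 27 job rows of this run, the tally should match the summary row's 3/20/4 split.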