User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
rzarzynski | 2021-06-28 13:05:10 | 2021-06-28 13:25:57 | 2021-06-29 01:39:55 | 12:13:58 | rados | master | smithi | a14f60c | 4 | 15 | 5 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6239408 | 2021-06-28 13:05:42 | 2021-06-28 13:25:57 | 2021-06-28 14:11:03 | 0:45:06 | 0:30:26 | 0:14:40 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason: Command failed on smithi052 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
pass | 6239409 | 2021-06-28 13:05:43 | 2021-06-28 13:25:57 | 2021-06-28 14:25:32 | 0:59:35 | 0:49:00 | 0:10:35 | smithi | master | rhel | 8.3 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
pass | 6239410 | 2021-06-28 13:05:44 | 2021-06-28 13:25:57 | 2021-06-28 13:59:24 | 0:33:27 | 0:18:07 | 0:15:20 | smithi | master | centos | 8.3 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
dead | 6239411 | 2021-06-28 13:05:44 | 2021-06-28 13:25:58 | 2021-06-29 01:38:46 | 12:12:48 | | | smithi | master | rhel | 8.3 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: hit max job timeout
fail | 6239412 | 2021-06-28 13:05:45 | 2021-06-28 13:25:58 | 2021-06-28 13:57:13 | 0:31:15 | 0:16:41 | 0:14:34 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi041 with status 5: 'sudo systemctl stop ceph-a44cef6e-d818-11eb-8c1a-001a4aab830c@mon.a'
dead | 6239413 | 2021-06-28 13:05:46 | 2021-06-28 13:25:59 | 2021-06-29 01:39:18 | 12:13:19 | | | smithi | master | centos | 8.3 | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 |
Failure Reason: hit max job timeout
fail | 6239414 | 2021-06-28 13:05:46 | 2021-06-28 13:25:59 | 2021-06-28 16:59:51 | 3:33:52 | 3:18:39 | 0:15:13 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} | 2 | |
Failure Reason: Command failed (workunit test rados/stress_watch.sh) on smithi150 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'
pass | 6239415 | 2021-06-28 13:05:47 | 2021-06-28 13:26:00 | 2021-06-28 13:57:25 | 0:31:25 | 0:17:37 | 0:13:48 | smithi | master | centos | 8.3 | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
fail | 6239416 | 2021-06-28 13:05:48 | 2021-06-28 13:26:00 | 2021-06-28 14:01:56 | 0:35:56 | 0:25:16 | 0:10:40 | smithi | master | rhel | 8.3 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi071 with status 5: 'sudo systemctl stop ceph-48373350-d819-11eb-8c1a-001a4aab830c@mon.a'
fail | 6239417 | 2021-06-28 13:05:48 | 2021-06-28 13:26:01 | 2021-06-28 13:58:10 | 0:32:09 | 0:15:36 | 0:16:33 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi062 with status 5: 'sudo systemctl stop ceph-aeaced10-d818-11eb-8c1a-001a4aab830c@mon.a'
dead | 6239418 | 2021-06-28 13:05:49 | 2021-06-28 13:26:02 | 2021-06-29 01:39:16 | 12:13:14 | | | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 |
Failure Reason: hit max job timeout
fail | 6239419 | 2021-06-28 13:05:50 | 2021-06-28 13:26:02 | 2021-06-28 14:23:55 | 0:57:53 | 0:42:40 | 0:15:13 | smithi | master | centos | 8.3 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: timed out waiting for admin_socket to appear after osd.2 restart
pass | 6239420 | 2021-06-28 13:05:50 | 2021-06-28 13:26:03 | 2021-06-28 15:00:21 | 1:34:18 | 1:22:06 | 0:12:12 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} | 2 | |
fail | 6239421 | 2021-06-28 13:05:51 | 2021-06-28 13:26:04 | 2021-06-28 14:24:14 | 0:58:10 | 0:42:22 | 0:15:48 | smithi | master | centos | 8.3 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 2 | |
Failure Reason: Command failed (workunit test mon/pg_autoscaler.sh) on smithi053 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh'
fail | 6239422 | 2021-06-28 13:05:52 | 2021-06-28 13:26:04 | 2021-06-28 14:12:30 | 0:46:26 | 0:29:33 | 0:16:53 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason: Command failed on smithi099 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
dead | 6239423 | 2021-06-28 13:05:52 | 2021-06-28 13:26:05 | 2021-06-28 13:26:22 | 0:00:17 | | | smithi | master | centos | 8.2 | rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 |
Failure Reason: Error reimaging machines: 501 Server Error: Not Implemented for url: http://fog.front.sepia.ceph.com/fog/host/166/task?node=schema
fail | 6239424 | 2021-06-28 13:05:53 | 2021-06-28 13:26:05 | 2021-06-28 13:56:19 | 0:30:14 | 0:14:52 | 0:15:22 | smithi | master | centos | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/repair_test} | 2 | |
fail | 6239425 | 2021-06-28 13:05:54 | 2021-06-28 13:26:06 | 2021-06-28 13:58:25 | 0:32:19 | 0:17:24 | 0:14:55 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi160 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 6239426 | 2021-06-28 13:05:54 | 2021-06-28 13:26:06 | 2021-06-28 17:06:24 | 3:40:18 | 3:29:17 | 0:11:01 | smithi | master | rhel | 8.3 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi068 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
fail | 6239427 | 2021-06-28 13:05:55 | 2021-06-28 13:26:07 | 2021-06-28 14:30:56 | 1:04:49 | 0:52:47 | 0:12:02 | smithi | master | rhel | 8.3 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 2 | |
Failure Reason: Command failed on smithi017 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats'
dead | 6239428 | 2021-06-28 13:05:56 | 2021-06-28 13:26:07 | 2021-06-29 01:39:55 | 12:13:48 | | | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/dedup-io-mixed} | 2 |
Failure Reason: hit max job timeout
fail | 6239429 | 2021-06-28 13:05:56 | 2021-06-28 13:26:07 | 2021-06-28 13:58:38 | 0:32:31 | 0:15:54 | 0:16:37 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi174 with status 5: 'sudo systemctl stop ceph-c0fa017e-d818-11eb-8c1a-001a4aab830c@mon.a'
fail | 6239430 | 2021-06-28 13:05:57 | 2021-06-28 13:26:08 | 2021-06-28 16:59:00 | 3:32:52 | 3:21:12 | 0:11:40 | smithi | master | rhel | 8.3 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi077 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6239431 | 2021-06-28 13:05:58 | 2021-06-28 13:26:08 | 2021-06-28 14:02:32 | 0:36:24 | 0:25:31 | 0:10:53 | smithi | master | rhel | 8.3 | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 |
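The Pass/Fail/Dead totals in the summary row (4/15/5) can be cross-checked by tallying the Status column of the job table. A minimal sketch, assuming the table is available as pipe-delimited text; `tally_statuses` is a hypothetical helper, not part of teuthology or pulpito:

```python
from collections import Counter

def tally_statuses(table_text):
    """Count pass/fail/dead rows in a pipe-delimited job table.

    The status is the first pipe-separated cell of each row; header,
    separator, and 'Failure Reason' lines are skipped automatically
    because their first cell is not a known status.
    """
    counts = Counter()
    for line in table_text.splitlines():
        status = line.split("|", 1)[0].strip()
        if status in ("pass", "fail", "dead"):
            counts[status] += 1
    return counts

# A few rows from the table above, trimmed to the first two columns:
sample = """\
fail | 6239408 | ...
pass | 6239409 | ...
dead | 6239411 | ...
fail | 6239412 | ...
"""
print(tally_statuses(sample))  # Counter({'fail': 2, 'pass': 1, 'dead': 1})
```

Run against the full table text, the counts should match the 4/15/5 breakdown reported in the summary row.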