User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
rzarzynski | 2022-07-13 09:14:45 | 2022-07-13 19:02:34 | 2022-07-14 07:11:39 | 12:09:05 | rados | main | smithi | 1647216 | 7 | 16 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6928784 | 2022-07-13 09:15:17 | 2022-07-13 19:02:34 | 2022-07-13 19:45:06 | 0:42:32 | 0:34:05 | 0:08:27 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} | 2 | |
fail | 6928785 | 2022-07-13 09:15:18 | 2022-07-13 19:02:34 | 2022-07-13 20:07:05 | 1:04:31 | 0:55:34 | 0:08:57 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
Failure Reason: "2022-07-13T19:57:57.371670+0000 mon.a (mon.0) 1870 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 6928786 | 2022-07-13 09:15:19 | 2022-07-13 19:02:34 | 2022-07-13 22:49:24 | 3:46:50 | 3:40:08 | 0:06:42 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi038 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e7f49c256f8f4423de0179cd5ade14f6f211bd5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 6928787 | 2022-07-13 09:15:21 | 2022-07-13 19:02:35 | 2022-07-13 19:37:59 | 0:35:24 | 0:28:57 | 0:06:27 | smithi | main | rhel | 8.5 | rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 2 | |
fail | 6928788 | 2022-07-13 09:15:22 | 2022-07-13 19:02:35 | 2022-07-13 20:31:38 | 1:29:03 | 1:22:39 | 0:06:24 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
Failure Reason: reached maximum tries (500) after waiting for 3000 seconds
fail | 6928789 | 2022-07-13 09:15:23 | 2022-07-13 19:02:35 | 2022-07-13 22:39:31 | 3:36:56 | 3:27:56 | 0:09:00 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} | 2 | |
Failure Reason: Command failed (workunit test rados/stress_watch.sh) on smithi090 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e7f49c256f8f4423de0179cd5ade14f6f211bd5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'
fail | 6928790 | 2022-07-13 09:15:24 | 2022-07-13 19:02:36 | 2022-07-13 19:25:45 | 0:23:09 | 0:17:18 | 0:05:51 | smithi | main | rhel | 8.5 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6928791 | 2022-07-13 09:15:25 | 2022-07-13 19:02:36 | 2022-07-13 19:43:00 | 0:40:24 | 0:33:00 | 0:07:24 | smithi | main | rhel | 8.5 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/rados_5925} | 2 | |
fail | 6928792 | 2022-07-13 09:15:26 | 2022-07-13 19:02:36 | 2022-07-13 19:37:59 | 0:35:23 | 0:28:41 | 0:06:42 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/redirect_promote_tests} | 2 | |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --set_redirect --low_tier_pool low_tier --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 50 --op write 50 --op delete 10 --op tier_promote 30 --op write_excl 50 --pool unique_pool_0'
fail | 6928793 | 2022-07-13 09:15:27 | 2022-07-13 19:02:37 | 2022-07-13 19:35:51 | 0:33:14 | 0:27:04 | 0:06:10 | smithi | main | centos | 8.stream | rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
Failure Reason: SELinux denials found on ubuntu@smithi100.front.sepia.ceph.com: ['type=AVC msg=audit(1657740064.315:16573): avc: denied { open } for pid=1000 comm="sssd_be" path="/etc/resolv.conf" dev="sda1" ino=262271 scontext=system_u:system_r:sssd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1']
fail | 6928794 | 2022-07-13 09:15:28 | 2022-07-13 19:02:37 | 2022-07-13 19:35:42 | 0:33:05 | 0:25:24 | 0:07:41 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-auth-caps mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: reached maximum tries (90) after waiting for 540 seconds
fail | 6928795 | 2022-07-13 09:15:30 | 2022-07-13 19:02:38 | 2022-07-13 19:40:41 | 0:38:03 | 0:31:10 | 0:06:53 | smithi | main | rhel | 8.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --write-fadvise-dontneed --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'
pass | 6928796 | 2022-07-13 09:15:31 | 2022-07-13 19:02:38 | 2022-07-13 19:44:26 | 0:41:48 | 0:34:15 | 0:07:33 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 6928797 | 2022-07-13 09:15:32 | 2022-07-13 19:02:39 | 2022-07-13 19:38:31 | 0:35:52 | 0:29:47 | 0:06:05 | smithi | main | rhel | 8.5 | rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 3 | |
pass | 6928798 | 2022-07-13 09:15:33 | 2022-07-13 19:02:39 | 2022-07-13 19:47:11 | 0:44:32 | 0:36:52 | 0:07:40 | smithi | main | centos | 8.stream | rados/singleton/{all/random-eio mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 2 | |
fail | 6928799 | 2022-07-13 09:15:34 | 2022-07-13 19:02:39 | 2022-07-13 19:32:16 | 0:29:37 | 0:22:30 | 0:07:07 | smithi | main | rhel | 8.5 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/repair_test} | 2 | |
fail | 6928800 | 2022-07-13 09:15:35 | 2022-07-13 19:02:40 | 2022-07-13 19:28:37 | 0:25:57 | 0:18:35 | 0:07:22 | smithi | main | rhel | 8.5 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi078 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e7f49c256f8f4423de0179cd5ade14f6f211bd5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 6928801 | 2022-07-13 09:15:36 | 2022-07-13 19:02:40 | 2022-07-13 20:06:48 | 1:04:08 | 0:56:00 | 0:08:08 | smithi | main | rhel | 8.5 | rados/singleton/{all/resolve_stuck_peering mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 2 | |
Failure Reason: timed out waiting for admin_socket to appear after osd.0 restart
fail | 6928802 | 2022-07-13 09:15:37 | 2022-07-13 19:02:40 | 2022-07-13 22:28:41 | 3:26:01 | 3:19:33 | 0:06:28 | smithi | main | centos | 8.stream | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi120 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e7f49c256f8f4423de0179cd5ade14f6f211bd5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
fail | 6928803 | 2022-07-13 09:15:38 | 2022-07-13 19:02:41 | 2022-07-13 19:57:42 | 0:55:01 | 0:47:51 | 0:07:10 | smithi | main | rhel | 8.5 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 2 | |
Failure Reason: Command failed on smithi077 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats'
dead | 6928804 | 2022-07-13 09:15:39 | 2022-07-13 19:02:41 | 2022-07-14 07:11:39 | 12:08:58 | | | smithi | main | rhel | 8.5 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/sync workloads/pool-create-delete} | 2 |
Failure Reason: hit max job timeout
pass | 6928805 | 2022-07-13 09:15:40 | 2022-07-13 19:02:41 | 2022-07-13 19:38:40 | 0:35:59 | 0:28:29 | 0:07:30 | smithi | main | centos | 8.stream | rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 3 | |
fail | 6928806 | 2022-07-13 09:15:42 | 2022-07-13 19:02:42 | 2022-07-13 22:51:50 | 3:49:08 | 3:42:00 | 0:07:08 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi116 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e7f49c256f8f4423de0179cd5ade14f6f211bd5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 6928807 | 2022-07-13 09:15:43 | 2022-07-13 19:02:42 | 2022-07-13 19:29:50 | 0:27:08 | 0:19:42 | 0:07:26 | smithi | main | rhel | 8.5 | rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed on smithi062 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump'