User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
lflores | 2023-06-06 17:48:06 | 2023-06-06 17:50:05 | 2023-06-07 06:00:27 | 12:10:22 | rados | wip-yuri7-testing-2023-06-05-1505 | smithi | 65cebea | 11 | 5 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 7296933 | 2023-06-06 17:49:17 | 2023-06-06 17:50:03 | 2023-06-07 06:00:27 | 12:10:24 | | | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_20.04}} | 1 |
Failure Reason: hit max job timeout
pass | 7296934 | 2023-06-06 17:49:18 | 2023-06-06 17:50:04 | 2023-06-06 18:40:01 | 0:49:57 | 0:37:04 | 0:12:53 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
pass | 7296935 | 2023-06-06 17:49:19 | 2023-06-06 17:50:04 | 2023-06-06 18:50:16 | 1:00:12 | 0:49:56 | 0:10:16 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 7296936 | 2023-06-06 17:49:20 | 2023-06-06 17:50:04 | 2023-06-06 18:28:36 | 0:38:32 | 0:25:46 | 0:12:46 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7296937 | 2023-06-06 17:49:21 | 2023-06-06 17:50:05 | 2023-06-06 18:29:18 | 0:39:13 | 0:30:58 | 0:08:15 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7296938 | 2023-06-06 17:49:21 | 2023-06-06 17:50:05 | 2023-06-06 18:14:56 | 0:24:51 | 0:19:24 | 0:05:27 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
fail | 7296939 | 2023-06-06 17:49:22 | 2023-06-06 17:50:05 | 2023-06-06 18:25:37 | 0:35:32 | 0:24:04 | 0:11:28 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi114 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=65cebeadd1b8670eeab54addbd95365d0898a99f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7296940 | 2023-06-06 17:49:23 | 2023-06-06 17:50:06 | 2023-06-06 18:23:16 | 0:33:10 | 0:21:30 | 0:11:40 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason: Command failed on smithi149 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents:TestClsRbd.mirror'"
dead | 7296941 | 2023-06-06 17:49:24 | 2023-06-06 17:50:06 | 2023-06-07 06:00:12 | 12:10:06 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: hit max job timeout
fail | 7296942 | 2023-06-06 17:49:24 | 2023-06-06 17:50:06 | 2023-06-06 18:13:55 | 0:23:49 | 0:12:49 | 0:11:00 | smithi | main | centos | 8.stream | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi173 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=65cebeadd1b8670eeab54addbd95365d0898a99f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
pass | 7296943 | 2023-06-06 17:49:25 | 2023-06-06 17:50:07 | 2023-06-06 19:57:14 | 2:07:07 | 1:55:55 | 0:11:12 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
pass | 7296944 | 2023-06-06 17:49:26 | 2023-06-06 17:50:07 | 2023-06-06 20:41:10 | 2:51:03 | 2:23:57 | 0:27:06 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{ubuntu_20.04}} | 1 | |
fail | 7296945 | 2023-06-06 17:49:27 | 2023-06-06 17:50:08 | 2023-06-06 18:12:45 | 0:22:37 | 0:11:39 | 0:10:58 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi160 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=65cebeadd1b8670eeab54addbd95365d0898a99f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
pass | 7296946 | 2023-06-06 17:49:27 | 2023-06-06 17:50:08 | 2023-06-06 20:33:11 | 2:43:03 | 2:32:55 | 0:10:08 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/osd-backfill} | 1 | |
fail | 7296947 | 2023-06-06 17:49:28 | 2023-06-06 17:50:08 | 2023-06-06 18:21:54 | 0:31:46 | 0:20:14 | 0:11:32 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi104 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=65cebeadd1b8670eeab54addbd95365d0898a99f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7296948 | 2023-06-06 17:49:29 | 2023-06-06 17:50:09 | 2023-06-06 19:12:43 | 1:22:34 | 1:07:35 | 0:14:59 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_20.04} thrashers/pggrow thrashosds-health workloads/radosbench} | 2 | |
pass | 7296949 | 2023-06-06 17:49:30 | 2023-06-06 17:50:09 | 2023-06-06 18:50:48 | 1:00:39 | 0:50:15 | 0:10:24 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 7296950 | 2023-06-06 17:49:30 | 2023-06-06 17:50:09 | 2023-06-06 20:35:19 | 2:45:10 | 2:38:16 | 0:06:54 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd} | 1 |