User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
sseshasa | 2023-05-02 03:12:27 | 2023-05-02 03:51:40 | 2023-05-02 06:55:30 | 3:03:50 | rados | wip-sseshasa3-testing-2023-05-01-2154 | smithi | b5b2f91 | 13 | 12 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7260278 | 2023-05-02 03:12:51 | 2023-05-02 03:50:49 | 2023-05-02 04:51:19 | 1:00:30 | 0:49:35 | 0:10:55 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_20.04} thrashers/many workloads/rados_mon_osdmap_prune} | 2 | Command failed on smithi177 with status 123: "sudo find /var/log/ceph -name '*.log' -print0 \| sudo xargs -0 --no-run-if-empty -- gzip --" |
fail | 7260279 | 2023-05-02 03:12:51 | 2023-05-02 03:50:49 | 2023-05-02 04:13:21 | 0:22:32 | 0:11:44 | 0:10:48 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 1 | Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi163 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b5b2f910483790328da0b34d3489800416b45bd6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh' |
fail | 7260280 | 2023-05-02 03:12:52 | 2023-05-02 03:51:09 | 2023-05-02 04:07:00 | 0:15:51 | 0:06:20 | 0:09:31 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/1-node k8s/1.21 net/host rook/1.7.2} | 1 | Command failed on smithi191 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
pass | 7260281 | 2023-05-02 03:12:53 | 2023-05-02 03:51:10 | 2023-05-02 04:24:56 | 0:33:46 | 0:24:59 | 0:08:47 | smithi | main | ubuntu | 22.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
fail | 7260282 | 2023-05-02 03:12:54 | 2023-05-02 03:51:10 | 2023-05-02 04:11:37 | 0:20:27 | 0:09:47 | 0:10:40 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_20.04}} | 1 | Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi081 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b5b2f910483790328da0b34d3489800416b45bd6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh' |
pass | 7260283 | 2023-05-02 03:12:54 | 2023-05-02 03:51:40 | 2023-05-02 04:14:43 | 0:23:03 | 0:16:54 | 0:06:09 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
pass | 7260284 | 2023-05-02 03:12:55 | 2023-05-02 03:51:41 | 2023-05-02 04:21:25 | 0:29:44 | 0:12:42 | 0:17:02 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{ubuntu_20.04} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7260285 | 2023-05-02 03:12:56 | 2023-05-02 03:55:52 | 2023-05-02 04:39:18 | 0:43:26 | 0:35:42 | 0:07:44 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-bitmap} tasks/e2e} | 2 | Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi005 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b5b2f910483790328da0b34d3489800416b45bd6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
fail | 7260286 | 2023-05-02 03:12:57 | 2023-05-02 03:56:22 | 2023-05-02 04:15:22 | 0:19:00 | 0:06:29 | 0:12:31 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/3-node k8s/1.21 net/calico rook/master} | 3 | Command failed on smithi012 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
fail | 7260287 | 2023-05-02 03:12:58 | 2023-05-02 03:57:23 | 2023-05-02 04:20:14 | 0:22:51 | 0:15:03 | 0:07:48 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | Command failed (workunit test rados/test_librados_build.sh) on smithi096 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b5b2f910483790328da0b34d3489800416b45bd6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh' |
pass | 7260288 | 2023-05-02 03:12:59 | 2023-05-02 03:57:23 | 2023-05-02 04:30:18 | 0:32:55 | 0:27:17 | 0:05:38 | smithi | main | rhel | 8.6 | rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7260289 | 2023-05-02 03:12:59 | 2023-05-02 03:57:23 | 2023-05-02 04:30:50 | 0:33:27 | 0:26:52 | 0:06:35 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} | 1 | |
pass | 7260290 | 2023-05-02 03:13:00 | 2023-05-02 03:57:54 | 2023-05-02 04:23:04 | 0:25:10 | 0:17:42 | 0:07:28 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 7260291 | 2023-05-02 03:13:01 | 2023-05-02 03:57:54 | 2023-05-02 06:55:30 | 2:57:36 | 2:48:14 | 0:09:22 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} | 1 | |
pass | 7260292 | 2023-05-02 03:13:02 | 2023-05-02 04:02:40 | 2023-05-02 04:42:19 | 0:39:39 | 0:31:38 | 0:08:01 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
pass | 7260293 | 2023-05-02 03:13:03 | 2023-05-02 04:03:21 | 2023-05-02 04:55:42 | 0:52:21 | 0:40:18 | 0:12:03 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_20.04}} | 1 | |
fail | 7260294 | 2023-05-02 03:13:03 | 2023-05-02 04:05:01 | 2023-05-02 04:20:52 | 0:15:51 | 0:06:16 | 0:09:35 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/1-node k8s/1.21 net/flannel rook/1.7.2} | 1 | Command failed on smithi099 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
pass | 7260295 | 2023-05-02 03:13:04 | 2023-05-02 04:05:02 | 2023-05-02 05:17:31 | 1:12:29 | 1:05:32 | 0:06:57 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-radosbench} | 2 | |
pass | 7260296 | 2023-05-02 03:13:05 | 2023-05-02 04:05:12 | 2023-05-02 04:28:42 | 0:23:30 | 0:15:57 | 0:07:33 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 2 | |
fail | 7260297 | 2023-05-02 03:13:06 | 2023-05-02 04:05:33 | 2023-05-02 04:46:08 | 0:40:35 | 0:32:10 | 0:08:25 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} tasks/e2e} | 2 | Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b5b2f910483790328da0b34d3489800416b45bd6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
fail | 7260298 | 2023-05-02 03:13:06 | 2023-05-02 04:06:33 | 2023-05-02 04:25:31 | 0:18:58 | 0:06:32 | 0:12:26 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/3-node k8s/1.21 net/host rook/master} | 3 | Command failed on smithi159 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
pass | 7260299 | 2023-05-02 03:13:07 | 2023-05-02 04:07:04 | 2023-05-02 04:34:08 | 0:27:04 | 0:14:47 | 0:12:17 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7260300 | 2023-05-02 03:13:08 | 2023-05-02 04:08:04 | 2023-05-02 04:29:33 | 0:21:29 | 0:11:07 | 0:10:22 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats' |
pass | 7260301 | 2023-05-02 03:13:09 | 2023-05-02 04:08:04 | 2023-05-02 04:50:41 | 0:42:37 | 0:31:48 | 0:10:49 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 | |
fail | 7260302 | 2023-05-02 03:13:10 | 2023-05-02 04:08:55 | 2023-05-02 04:27:54 | 0:18:59 | 0:12:04 | 0:06:55 | smithi | main | centos | 8.stream | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 1 | Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b5b2f910483790328da0b34d3489800416b45bd6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh' |