User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
benhanokh | 2021-12-09 13:56:44 | 2021-12-09 13:59:14 | 2021-12-09 17:42:38 | 3:43:24 | rados | WIP_GBH_safe_fast_shutdown_7 | smithi | 38b253f | 32 | 25 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6555281 | 2021-12-09 13:58:13 | 2021-12-09 13:59:13 | 2021-12-09 14:22:27 | 0:23:14 | 0:15:55 | 0:07:19 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi120 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
pass | 6555282 | 2021-12-09 13:58:14 | 2021-12-09 13:59:14 | 2021-12-09 15:37:02 | 1:37:48 | 1:27:42 | 0:10:06 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 6555283 | 2021-12-09 13:58:15 | 2021-12-09 13:59:14 | 2021-12-09 14:44:25 | 0:45:11 | 0:38:06 | 0:07:05 | smithi | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} | 2 | |
fail | 6555284 | 2021-12-09 13:58:16 | 2021-12-09 13:59:14 | 2021-12-09 14:23:16 | 0:24:02 | 0:15:58 | 0:08:04 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi022 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
pass | 6555285 | 2021-12-09 13:58:17 | 2021-12-09 14:00:05 | 2021-12-09 16:23:18 | 2:23:13 | 2:13:46 | 0:09:27 | smithi | master | centos | 8.3 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-radosbench} | 2 | |
fail | 6555286 | 2021-12-09 13:58:18 | 2021-12-09 14:00:05 | 2021-12-09 14:24:41 | 0:24:36 | 0:11:53 | 0:12:43 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi111 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
fail | 6555287 | 2021-12-09 13:58:19 | 2021-12-09 14:04:46 | 2021-12-09 14:37:43 | 0:32:57 | 0:27:36 | 0:05:21 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/crush} | 1 | |
Failure Reason:
Command failed (workunit test crush/crush-choose-args.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38b253f7be17fe189c4eab534d1ab245f9660e68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/crush/crush-choose-args.sh'
fail | 6555288 | 2021-12-09 13:58:20 | 2021-12-09 14:04:46 | 2021-12-09 14:28:23 | 0:23:37 | 0:16:28 | 0:07:09 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi104 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
pass | 6555289 | 2021-12-09 13:58:21 | 2021-12-09 14:04:57 | 2021-12-09 14:30:24 | 0:25:27 | 0:17:50 | 0:07:37 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 6555290 | 2021-12-09 13:58:22 | 2021-12-09 14:04:57 | 2021-12-09 14:55:01 | 0:50:04 | 0:39:54 | 0:10:10 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} | 2 | |
fail | 6555291 | 2021-12-09 13:58:23 | 2021-12-09 14:05:48 | 2021-12-09 14:47:00 | 0:41:12 | 0:30:50 | 0:10:22 | smithi | master | centos | 8.3 | rados/standalone/{supported-random-distro$/{centos_8} workloads/erasure-code} | 1 | |
Failure Reason:
Command failed (workunit test erasure-code/test-erasure-code.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38b253f7be17fe189c4eab534d1ab245f9660e68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code.sh'
fail | 6555292 | 2021-12-09 13:58:24 | 2021-12-09 14:05:48 | 2021-12-09 14:26:03 | 0:20:15 | 0:12:16 | 0:07:59 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi018 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
fail | 6555293 | 2021-12-09 13:58:25 | 2021-12-09 14:06:48 | 2021-12-09 14:23:09 | 0:16:21 | 0:06:06 | 0:10:15 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/host rook/master} | 3 | |
Failure Reason:
[Errno 2] Cannot find file on the remote 'ubuntu@smithi013.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
fail | 6555294 | 2021-12-09 13:58:26 | 2021-12-09 14:07:09 | 2021-12-09 14:33:04 | 0:25:55 | 0:17:12 | 0:08:43 | smithi | master | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38b253f7be17fe189c4eab534d1ab245f9660e68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6555295 | 2021-12-09 13:58:27 | 2021-12-09 14:07:49 | 2021-12-09 14:27:00 | 0:19:11 | 0:12:01 | 0:07:10 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi006 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
fail | 6555296 | 2021-12-09 13:58:28 | 2021-12-09 14:07:49 | 2021-12-09 14:36:03 | 0:28:14 | 0:15:54 | 0:12:20 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi118 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
pass | 6555297 | 2021-12-09 13:58:29 | 2021-12-09 14:12:21 | 2021-12-09 15:38:12 | 1:25:51 | 1:15:53 | 0:09:58 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 6555298 | 2021-12-09 13:58:30 | 2021-12-09 14:12:41 | 2021-12-09 14:43:07 | 0:30:26 | 0:21:58 | 0:08:28 | smithi | master | rhel | 8.4 | rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 6555299 | 2021-12-09 13:58:31 | 2021-12-09 14:13:41 | 2021-12-09 14:33:23 | 0:19:42 | 0:12:07 | 0:07:35 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi023 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
fail | 6555300 | 2021-12-09 13:58:32 | 2021-12-09 14:13:42 | 2021-12-09 14:53:41 | 0:39:59 | 0:28:31 | 0:11:28 | smithi | master | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/misc} | 1 | |
Failure Reason:
Command failed (workunit test misc/test-ceph-helpers.sh) on smithi187 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38b253f7be17fe189c4eab534d1ab245f9660e68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/test-ceph-helpers.sh'
fail | 6555301 | 2021-12-09 13:58:32 | 2021-12-09 14:13:52 | 2021-12-09 14:38:44 | 0:24:52 | 0:16:10 | 0:08:42 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi012 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
pass | 6555302 | 2021-12-09 13:58:33 | 2021-12-09 14:14:32 | 2021-12-09 14:43:18 | 0:28:46 | 0:17:33 | 0:11:13 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 6555303 | 2021-12-09 13:58:34 | 2021-12-09 14:14:53 | 2021-12-09 14:45:43 | 0:30:50 | 0:15:37 | 0:15:13 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
pass | 6555304 | 2021-12-09 13:58:35 | 2021-12-09 14:19:34 | 2021-12-09 15:05:51 | 0:46:17 | 0:34:43 | 0:11:34 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} | 3 | |
pass | 6555305 | 2021-12-09 13:58:36 | 2021-12-09 14:20:04 | 2021-12-09 14:46:00 | 0:25:56 | 0:16:29 | 0:09:27 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6555306 | 2021-12-09 13:58:37 | 2021-12-09 14:20:04 | 2021-12-09 14:58:25 | 0:38:21 | 0:26:00 | 0:12:21 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6555307 | 2021-12-09 13:58:38 | 2021-12-09 14:22:35 | 2021-12-09 14:42:48 | 0:20:13 | 0:09:28 | 0:10:45 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
pass | 6555308 | 2021-12-09 13:58:39 | 2021-12-09 14:23:15 | 2021-12-09 14:45:10 | 0:21:55 | 0:11:00 | 0:10:55 | smithi | master | ubuntu | 20.04 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 3 | |
pass | 6555309 | 2021-12-09 13:58:40 | 2021-12-09 14:23:26 | 2021-12-09 14:49:27 | 0:26:01 | 0:13:28 | 0:12:33 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache-agent-small} | 2 | |
fail | 6555310 | 2021-12-09 13:58:41 | 2021-12-09 14:24:46 | 2021-12-09 14:48:01 | 0:23:15 | 0:16:12 | 0:07:03 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi085 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
pass | 6555311 | 2021-12-09 13:58:42 | 2021-12-09 14:24:47 | 2021-12-09 14:50:21 | 0:25:34 | 0:15:00 | 0:10:34 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/pool-create-delete} | 2 | |
pass | 6555312 | 2021-12-09 13:58:43 | 2021-12-09 14:24:47 | 2021-12-09 15:16:00 | 0:51:13 | 0:39:46 | 0:11:27 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
pass | 6555313 | 2021-12-09 13:58:44 | 2021-12-09 14:25:17 | 2021-12-09 14:59:04 | 0:33:47 | 0:22:27 | 0:11:20 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
pass | 6555314 | 2021-12-09 13:58:45 | 2021-12-09 14:25:18 | 2021-12-09 15:02:07 | 0:36:49 | 0:26:00 | 0:10:49 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
pass | 6555315 | 2021-12-09 13:58:46 | 2021-12-09 14:49:35 | 2021-12-09 15:09:13 | 0:19:38 | 0:08:54 | 0:10:44 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6555316 | 2021-12-09 13:58:47 | 2021-12-09 14:49:45 | 2021-12-09 15:20:07 | 0:30:22 | 0:20:27 | 0:09:55 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/cache-pool-snaps} | 2 | |
fail | 6555317 | 2021-12-09 13:58:48 | 2021-12-09 14:50:25 | 2021-12-09 15:09:14 | 0:18:49 | 0:09:54 | 0:08:55 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi097 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3386c646-5901-11ec-8c30-001a4aab830c -- ceph orch daemon add osd smithi097:vg_nvme/lv_4'
pass | 6555318 | 2021-12-09 13:58:49 | 2021-12-09 14:51:36 | 2021-12-09 15:12:47 | 0:21:11 | 0:10:19 | 0:10:52 | smithi | master | centos | 8.stream | rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
pass | 6555319 | 2021-12-09 13:58:50 | 2021-12-09 14:55:07 | 2021-12-09 15:44:20 | 0:49:13 | 0:39:40 | 0:09:33 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 6555320 | 2021-12-09 13:58:51 | 2021-12-09 14:55:07 | 2021-12-09 15:20:40 | 0:25:33 | 0:15:34 | 0:09:59 | smithi | master | | | rados/cephadm/workunits/{agent/on mon_election/connectivity task/test_cephadm} | 1 | |
pass | 6555321 | 2021-12-09 13:58:52 | 2021-12-09 14:55:07 | 2021-12-09 15:34:22 | 0:39:15 | 0:28:21 | 0:10:54 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6555322 | 2021-12-09 13:58:53 | 2021-12-09 14:56:48 | 2021-12-09 15:40:59 | 0:44:11 | 0:31:42 | 0:12:29 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
pass | 6555323 | 2021-12-09 13:58:54 | 2021-12-09 14:57:48 | 2021-12-09 15:18:35 | 0:20:47 | 0:08:32 | 0:12:15 | smithi | master | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/rados_5925} | 2 | |
fail | 6555324 | 2021-12-09 13:58:55 | 2021-12-09 14:58:29 | 2021-12-09 15:19:02 | 0:20:33 | 0:12:16 | 0:08:17 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi158 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/daemon-base:latest-pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4c80dc80-5902-11ec-8c30-001a4aab830c -- ceph mon dump -f json'
fail | 6555325 | 2021-12-09 13:58:56 | 2021-12-09 14:58:59 | 2021-12-09 15:34:18 | 0:35:19 | 0:27:51 | 0:07:28 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd-backfill} | 1 | |
Failure Reason:
Command failed (workunit test osd-backfill/osd-backfill-prio.sh) on smithi160 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38b253f7be17fe189c4eab534d1ab245f9660e68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd-backfill/osd-backfill-prio.sh'
pass | 6555326 | 2021-12-09 13:58:57 | 2021-12-09 14:58:59 | 2021-12-09 15:33:36 | 0:34:37 | 0:24:11 | 0:10:26 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6555327 | 2021-12-09 13:58:58 | 2021-12-09 15:02:10 | 2021-12-09 15:22:19 | 0:20:09 | 0:12:02 | 0:08:07 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi139 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
fail | 6555328 | 2021-12-09 13:58:59 | 2021-12-09 15:03:01 | 2021-12-09 15:18:42 | 0:15:41 | 0:05:09 | 0:10:32 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/flannel rook/master} | 1 | |
Failure Reason:
[Errno 2] Cannot find file on the remote 'ubuntu@smithi093.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
fail | 6555329 | 2021-12-09 13:58:59 | 2021-12-09 15:03:01 | 2021-12-09 15:27:24 | 0:24:23 | 0:17:03 | 0:07:20 | smithi | master | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38b253f7be17fe189c4eab534d1ab245f9660e68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6555330 | 2021-12-09 13:59:00 | 2021-12-09 15:04:21 | 2021-12-09 15:25:54 | 0:21:33 | 0:14:33 | 0:07:00 | smithi | master | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/osd} | 1 | |
Failure Reason:
Command failed (workunit test osd/bad-inc-map.sh) on smithi115 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38b253f7be17fe189c4eab534d1ab245f9660e68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/bad-inc-map.sh'
fail | 6555331 | 2021-12-09 13:59:01 | 2021-12-09 15:04:52 | 2021-12-09 15:24:44 | 0:19:52 | 0:11:54 | 0:07:58 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi094 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
pass | 6555332 | 2021-12-09 13:59:02 | 2021-12-09 15:05:42 | 2021-12-09 17:33:13 | 2:27:31 | 2:18:12 | 0:09:19 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
fail | 6555333 | 2021-12-09 13:59:03 | 2021-12-09 15:05:52 | 2021-12-09 15:28:56 | 0:23:04 | 0:16:22 | 0:06:42 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi055 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'
pass | 6555334 | 2021-12-09 13:59:04 | 2021-12-09 15:05:53 | 2021-12-09 16:23:40 | 1:17:47 | 1:07:35 | 0:10:12 | smithi | master | centos | 8.3 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 6555335 | 2021-12-09 13:59:05 | 2021-12-09 15:06:33 | 2021-12-09 17:42:38 | 2:36:05 | 2:24:42 | 0:11:23 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-radosbench} | 2 | |
pass | 6555336 | 2021-12-09 13:59:06 | 2021-12-09 15:06:44 | 2021-12-09 16:15:14 | 1:08:30 | 0:57:46 | 0:10:44 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/radosbench} | 2 | |
fail | 6555337 | 2021-12-09 13:59:07 | 2021-12-09 15:06:54 | 2021-12-09 16:07:49 | 1:00:55 | 0:53:40 | 0:07:15 | smithi | master | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} | 1 | |
Failure Reason:
Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi035 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38b253f7be17fe189c4eab534d1ab245f9660e68 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh' |