Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6551751 2021-12-07 22:57:42 2021-12-07 22:59:10 2021-12-07 23:52:29 0:53:19 0:44:06 0:09:13 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/radosbench-high-concurrency} 2
pass 6551752 2021-12-07 22:57:43 2021-12-07 22:59:11 2021-12-07 23:34:40 0:35:29 0:28:01 0:07:28 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/redirect_promote_tests} 2
fail 6551753 2021-12-07 22:57:44 2021-12-07 22:59:31 2021-12-07 23:23:20 0:23:49 0:16:20 0:07:29 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi045 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

pass 6551754 2021-12-07 22:57:45 2021-12-07 22:59:42 2021-12-07 23:45:29 0:45:47 0:33:01 0:12:46 smithi master centos 8.3 rados/thrash-old-clients/{0-distro$/{centos_8.3_container_tools_3.0} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} 3
pass 6551755 2021-12-07 22:57:46 2021-12-07 23:02:02 2021-12-07 23:21:45 0:19:43 0:09:45 0:09:58 smithi master ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 6551756 2021-12-07 22:57:47 2021-12-07 23:02:03 2021-12-07 23:33:39 0:31:36 0:25:43 0:05:53 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/write_fadvise_dontneed} 2
pass 6551757 2021-12-07 22:57:48 2021-12-07 23:02:13 2021-12-07 23:25:40 0:23:27 0:14:09 0:09:18 smithi master rados/cephadm/workunits/{agent/on mon_election/classic task/test_orch_cli} 1
fail 6551758 2021-12-07 22:57:49 2021-12-07 23:02:13 2021-12-07 23:21:40 0:19:27 0:12:10 0:07:17 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi040 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

pass 6551759 2021-12-07 22:57:50 2021-12-07 23:02:34 2021-12-07 23:49:42 0:47:08 0:41:45 0:05:23 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps} 2
pass 6551760 2021-12-07 22:57:51 2021-12-07 23:02:44 2021-12-07 23:20:16 0:17:32 0:11:08 0:06:24 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/dedup-io-mixed} 2
fail 6551761 2021-12-07 22:57:52 2021-12-07 23:03:04 2021-12-07 23:28:50 0:25:46 0:16:13 0:09:33 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi116 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

fail 6551762 2021-12-07 22:57:53 2021-12-07 23:05:35 2021-12-07 23:25:41 0:20:06 0:11:46 0:08:20 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi078 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

fail 6551763 2021-12-07 22:57:54 2021-12-07 23:06:16 2021-12-07 23:27:41 0:21:25 0:15:07 0:06:18 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi018 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f03ae1e70adb8dbd5e8f8833fef083e19ed4cb2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6551764 2021-12-07 22:57:55 2021-12-07 23:06:36 2021-12-07 23:26:12 0:19:36 0:05:54 0:13:42 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi042.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 6551765 2021-12-07 22:57:56 2021-12-07 23:10:47 2021-12-07 23:30:08 0:19:21 0:11:56 0:07:25 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi160 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

pass 6551766 2021-12-07 22:57:57 2021-12-07 23:10:57 2021-12-07 23:53:14 0:42:17 0:34:13 0:08:04 smithi master centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 6551767 2021-12-07 22:57:58 2021-12-07 23:11:48 2021-12-07 23:35:19 0:23:31 0:15:59 0:07:32 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi110 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

dead 6551768 2021-12-07 22:57:59 2021-12-07 23:12:08 2021-12-08 11:22:48 12:10:40 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6551769 2021-12-07 22:58:00 2021-12-07 23:12:08 2021-12-07 23:32:00 0:19:52 0:12:01 0:07:51 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi065 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

fail 6551770 2021-12-07 22:58:01 2021-12-07 23:12:39 2021-12-07 23:37:35 0:24:56 0:15:47 0:09:09 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi002 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

fail 6551771 2021-12-07 22:58:02 2021-12-07 23:14:19 2021-12-07 23:37:40 0:23:21 0:15:44 0:07:37 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi063 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

fail 6551772 2021-12-07 22:58:03 2021-12-07 23:14:30 2021-12-07 23:34:37 0:20:07 0:11:39 0:08:28 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi096 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

pass 6551773 2021-12-07 22:58:04 2021-12-07 23:15:20 2021-12-07 23:30:54 0:15:34 0:08:53 0:06:41 smithi master centos 8.stream rados/singleton/{all/pg-autoscaler-progress-off mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream}} 2
fail 6551774 2021-12-07 22:58:05 2021-12-07 23:15:41 2021-12-07 23:40:22 0:24:41 0:16:50 0:07:51 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi029 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

dead 6551775 2021-12-07 22:58:06 2021-12-07 23:16:41 2021-12-08 11:32:23 12:15:42 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

hit max job timeout

fail 6551776 2021-12-07 22:58:07 2021-12-07 23:20:02 2021-12-07 23:39:33 0:19:31 0:11:54 0:07:37 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi073 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

fail 6551777 2021-12-07 22:58:08 2021-12-07 23:20:22 2021-12-07 23:42:09 0:21:47 0:14:39 0:07:08 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{filestore-xfs} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f03ae1e70adb8dbd5e8f8833fef083e19ed4cb2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6551778 2021-12-07 22:58:09 2021-12-07 23:20:42 2021-12-07 23:36:43 0:16:01 0:05:35 0:10:26 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi174.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 6551779 2021-12-07 22:58:10 2021-12-07 23:21:03 2021-12-07 23:40:40 0:19:37 0:11:46 0:07:51 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi105 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

fail 6551780 2021-12-07 22:58:11 2021-12-07 23:21:23 2021-12-07 23:44:47 0:23:24 0:15:59 0:07:25 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi040 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'

dead 6551781 2021-12-07 22:58:12 2021-12-07 23:21:44 2021-12-08 11:32:34 12:10:50 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6551782 2021-12-07 22:58:12 2021-12-07 23:21:54 2021-12-07 23:41:05 0:19:11 0:11:54 0:07:17 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi016 with status 32: 'sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,conf=/etc/ceph/ceph.conf,norbytes,name=0,mds_namespace=cephfs,nofallback'