User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-02-06 16:04:09 | 2022-02-06 16:06:33 | 2022-02-06 22:53:50 | 6:47:17 | rados | wip-yuri2-testing-2022-02-04-1646-pacific | smithi | 6aa4fcc | 16 | 16 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6665045 | 2022-02-06 16:05:54 | 2022-02-06 16:06:33 | 2022-02-06 16:23:21 | 0:16:48 | 0:10:58 | 0:05:50 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi116 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6aa4fcc62bbc85390459e2e69fccdea5b9e83966 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
dead | 6665046 | 2022-02-06 16:05:56 | 2022-02-06 16:06:33 | 2022-02-06 22:45:56 | 6:39:23 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason:
hit max job timeout |
fail | 6665047 | 2022-02-06 16:05:57 | 2022-02-06 16:21:43 | 501 | smithi | master | centos | 8.stream | rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | ||||
Failure Reason:
Command failed on smithi167 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6aa4fcc62bbc85390459e2e69fccdea5b9e83966 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e738cc9a-8767-11ec-8c35-001a4aab830c -- bash -c 'ceph orch host label add `hostname` foo'" |
pass | 6665048 | 2022-02-06 16:05:58 | 2022-02-06 16:06:34 | 2022-02-06 16:31:57 | 0:25:23 | 0:16:50 | 0:08:33 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
fail | 6665049 | 2022-02-06 16:05:59 | 2022-02-06 16:06:34 | 2022-02-06 16:19:47 | 0:13:13 | 0:06:52 | 0:06:21 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason:
[Errno 2] Cannot find file on the remote 'ubuntu@smithi175.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml' |
pass | 6665050 | 2022-02-06 16:06:00 | 2022-02-06 16:06:35 | 2022-02-06 16:35:37 | 0:29:02 | 0:22:04 | 0:06:58 | smithi | master | rhel | 8.4 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 6665051 | 2022-02-06 16:06:01 | 2022-02-06 16:06:45 | 2022-02-06 16:22:14 | 0:15:29 | 0:08:27 | 0:07:02 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zstd supported-random-distro$/{centos_8} tasks/module_selftest} | 2 | |
Failure Reason:
Test failure: test_devicehealth (tasks.mgr.test_module_selftest.TestModuleSelftest) |
pass | 6665052 | 2022-02-06 16:06:02 | 2022-02-06 16:07:05 | 2022-02-06 17:01:07 | 0:54:02 | 0:44:14 | 0:09:48 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
fail | 6665053 | 2022-02-06 16:06:03 | 2022-02-06 16:07:46 | 2022-02-06 16:16:49 | 0:09:03 | 0:03:00 | 0:06:03 | smithi | master | ubuntu | 20.04 | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason:
Command failed on smithi033 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s' |
pass | 6665054 | 2022-02-06 16:06:04 | 2022-02-06 16:07:46 | 2022-02-06 16:35:15 | 0:27:29 | 0:20:15 | 0:07:14 | smithi | master | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} | 2 | |
dead | 6665055 | 2022-02-06 16:06:05 | 2022-02-06 16:07:57 | 2022-02-06 22:47:53 | 6:39:56 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason:
hit max job timeout |
fail | 6665056 | 2022-02-06 16:06:06 | 2022-02-06 16:07:57 | 2022-02-06 16:26:21 | 0:18:24 | 0:11:40 | 0:06:44 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi042 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6aa4fcc62bbc85390459e2e69fccdea5b9e83966 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
pass | 6665057 | 2022-02-06 16:06:07 | 2022-02-06 16:07:57 | 2022-02-06 16:33:05 | 0:25:08 | 0:13:44 | 0:11:24 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
fail | 6665058 | 2022-02-06 16:06:08 | 2022-02-06 16:09:08 | 2022-02-06 16:32:07 | 0:22:59 | 0:17:05 | 0:05:54 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
Command failed on smithi058 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6aa4fcc62bbc85390459e2e69fccdea5b9e83966 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4773e2ce-8769-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nHOST=$(hostname -s)\nOSD=$(ceph orch ps $HOST | grep osd | head -n 1 | awk \'"\'"\'{print $1}\'"\'"\')\necho "host $HOST, osd $OSD"\nceph orch daemon stop $OSD\nwhile ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\nceph auth export $OSD > k\nceph orch daemon rm $OSD --force\nceph orch ps --refresh\nwhile ceph orch ps | grep $OSD ; do sleep 5 ; done\nceph auth add $OSD -i k\nceph cephadm osd activate $HOST\nwhile ! ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\n\'' |
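The command quoted in that failure reason is the rmdir-reactivate test's OSD stop/export/remove/activate loop, compressed into one heavily escaped line. Unescaped for readability, it runs roughly the following inside `cephadm shell` (a reconstruction from the log line above, not taken from the qa suite source; the exit status 127 typically indicates that a command inside this script was not found):

```bash
# OSD stop/export/remove/reactivate sequence, unescaped from the failure reason above
set -e
set -x
ceph orch ps
HOST=$(hostname -s)
OSD=$(ceph orch ps $HOST | grep osd | head -n 1 | awk '{print $1}')
echo "host $HOST, osd $OSD"
ceph orch daemon stop $OSD
while ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done
ceph auth export $OSD > k
ceph orch daemon rm $OSD --force
ceph orch ps --refresh
while ceph orch ps | grep $OSD ; do sleep 5 ; done
ceph auth add $OSD -i k
ceph cephadm osd activate $HOST
while ! ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done
```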
pass | 6665059 | 2022-02-06 16:06:09 | 2022-02-06 16:09:18 | 2022-02-06 16:42:35 | 0:33:17 | 0:25:33 | 0:07:44 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6665060 | 2022-02-06 16:06:10 | 2022-02-06 16:09:48 | 2022-02-06 16:37:26 | 0:27:38 | 0:16:59 | 0:10:39 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6665061 | 2022-02-06 16:06:12 | 2022-02-06 16:09:59 | 2022-02-06 16:47:25 | 0:37:26 | 0:28:35 | 0:08:51 | smithi | master | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
fail | 6665062 | 2022-02-06 16:06:13 | 2022-02-06 16:09:59 | 2022-02-06 16:33:39 | 0:23:40 | 0:16:38 | 0:07:02 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason:
Command failed on smithi141 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6aa4fcc62bbc85390459e2e69fccdea5b9e83966 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6fb80d82-8769-11ec-8c35-001a4aab830c -- bash -c 'ceph orch host label add `hostname` foo'" |
fail | 6665063 | 2022-02-06 16:06:14 | 2022-02-06 16:10:50 | 2022-02-06 16:34:04 | 0:23:14 | 0:13:24 | 0:09:50 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi047 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6aa4fcc62bbc85390459e2e69fccdea5b9e83966 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ffc6e99e-8768-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\'' |
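The rm-zap-add failure above embeds a similar script; the same unescaping applied to that log line gives roughly the sequence below (again a reconstruction for readability; status 22 from the outer command usually corresponds to an EINVAL returned by one of the ceph calls):

```bash
# OSD remove/zap/re-add sequence, unescaped from the failure reason above
set -e
set -x
ceph orch ps
ceph orch device ls
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
ceph orch osd rm 1
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
ceph orch device zap $HOST $DEV --force
ceph orch daemon add osd $HOST:$DEV
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```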
fail | 6665064 | 2022-02-06 16:06:15 | 2022-02-06 16:10:50 | 2022-02-06 16:29:22 | 0:18:32 | 0:11:30 | 0:07:02 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi022 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6aa4fcc62bbc85390459e2e69fccdea5b9e83966 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
dead | 6665065 | 2022-02-06 16:06:16 | 2022-02-06 16:10:50 | 2022-02-06 22:49:59 | 6:39:09 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason:
hit max job timeout |
pass | 6665066 | 2022-02-06 16:06:17 | 2022-02-06 16:10:50 | 2022-02-06 16:42:46 | 0:31:56 | 0:24:11 | 0:07:45 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} | 2 | |
pass | 6665067 | 2022-02-06 16:06:18 | 2022-02-06 16:11:21 | 2022-02-06 16:36:39 | 0:25:18 | 0:16:55 | 0:08:23 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
fail | 6665068 | 2022-02-06 16:06:19 | 2022-02-06 16:11:21 | 2022-02-06 17:13:24 | 1:02:03 | 0:54:42 | 0:07:21 | smithi | master | rhel | 8.4 | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{rhel_8} tasks/dashboard} | 2 | |
Failure Reason:
Test failure: test_pool_configuration (tasks.mgr.dashboard.test_pool.PoolTest) |
pass | 6665069 | 2022-02-06 16:06:20 | 2022-02-06 16:12:02 | 2022-02-06 19:25:11 | 3:13:09 | 3:01:50 | 0:11:19 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} | 4 | |
dead | 6665070 | 2022-02-06 16:06:21 | 2022-02-06 16:13:23 | 2022-02-06 22:53:50 | 6:40:27 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason:
hit max job timeout |
pass | 6665071 | 2022-02-06 16:06:22 | 2022-02-06 16:13:24 | 2022-02-06 16:47:17 | 0:33:53 | 0:27:05 | 0:06:48 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 6665072 | 2022-02-06 16:06:23 | 2022-02-06 16:13:44 | 2022-02-06 16:27:12 | 0:13:28 | 0:05:26 | 0:08:02 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason:
[Errno 2] Cannot find file on the remote 'ubuntu@smithi019.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml' |
pass | 6665073 | 2022-02-06 16:06:24 | 2022-02-06 16:15:15 | 2022-02-06 17:01:06 | 0:45:51 | 0:35:53 | 0:09:58 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
fail | 6665074 | 2022-02-06 16:06:25 | 2022-02-06 16:15:25 | 2022-02-06 16:38:19 | 0:22:54 | 0:16:42 | 0:06:12 | smithi | master | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
Command failed on smithi110 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6aa4fcc62bbc85390459e2e69fccdea5b9e83966 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 18eb7dee-876a-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nHOST=$(hostname -s)\nOSD=$(ceph orch ps $HOST | grep osd | head -n 1 | awk \'"\'"\'{print $1}\'"\'"\')\necho "host $HOST, osd $OSD"\nceph orch daemon stop $OSD\nwhile ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\nceph auth export $OSD > k\nceph orch daemon rm $OSD --force\nceph orch ps --refresh\nwhile ceph orch ps | grep $OSD ; do sleep 5 ; done\nceph auth add $OSD -i k\nceph cephadm osd activate $HOST\nwhile ! ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\n\'' |
fail | 6665075 | 2022-02-06 16:06:27 | 2022-02-06 16:15:26 | 2022-02-06 16:32:50 | 0:17:24 | 0:11:11 | 0:06:13 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi159 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6aa4fcc62bbc85390459e2e69fccdea5b9e83966 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
pass | 6665076 | 2022-02-06 16:06:28 | 2022-02-06 16:15:46 | 2022-02-06 16:53:43 | 0:37:57 | 0:26:47 | 0:11:10 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} | 3 | |
fail | 6665077 | 2022-02-06 16:06:29 | 2022-02-06 16:18:27 | 2022-02-06 16:34:43 | 0:16:16 | 0:08:29 | 0:07:47 | smithi | master | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8} tasks/module_selftest} | 2 | |
Failure Reason:
Test failure: test_devicehealth (tasks.mgr.test_module_selftest.TestModuleSelftest) |
pass | 6665078 | 2022-02-06 16:06:29 | 2022-02-06 16:18:37 | 2022-02-06 16:51:14 | 0:32:37 | 0:24:20 | 0:08:17 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6665079 | 2022-02-06 16:06:31 | 2022-02-06 16:18:58 | 2022-02-06 16:43:13 | 0:24:15 | 0:13:05 | 0:11:10 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason:
Command failed on smithi181 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6aa4fcc62bbc85390459e2e69fccdea5b9e83966 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 36316cba-876a-11ec-8c35-001a4aab830c -- bash -c 'ceph orch host label add `hostname` foo'" |
pass | 6665080 | 2022-02-06 16:06:32 | 2022-02-06 16:19:48 | 2022-02-06 16:47:00 | 0:27:12 | 0:17:02 | 0:10:10 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 |