User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail |
---|---|---|---|---|---|---|---|---|---|
khiremat | 2021-05-14 12:49:03 | 2021-05-14 12:49:53 | 2021-05-14 13:09:01 | 0:19:08 | fs | wip-khiremat-39910-mgr-hang-osd-full-testing-35 | smithi | 6263879 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6115325 | 2021-05-14 12:49:10 | 2021-05-14 12:49:53 | 2021-05-14 13:08:37 | 0:18:44 | 0:09:42 | 0:09:02 | smithi | master | centos | 8.3 | fs/full/{begin clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mgr-osd-full} | 1 |
Failure Reason:
Command failed (workunit test fs/full/subvolume_rm.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=62638798bcc99e8e8798e9ef35ca6ca77b41028a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_rm.sh'
fail | 6115326 | 2021-05-14 12:49:10 | 2021-05-14 12:49:53 | 2021-05-14 13:09:01 | 0:19:08 | 0:09:25 | 0:09:43 | smithi | master | centos | 8.3 | fs/full/{begin clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mgr-osd-full} | 1 |
Failure Reason:
Command failed (workunit test fs/full/subvolume_rm.sh) on smithi167 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=62638798bcc99e8e8798e9ef35ca6ca77b41028a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_rm.sh'
fail | 6115327 | 2021-05-14 12:49:10 | 2021-05-14 12:49:54 | 2021-05-14 13:08:46 | 0:18:52 | 0:09:36 | 0:09:16 | smithi | master | centos | 8.3 | fs/full/{begin clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mgr-osd-full} | 1 |
Failure Reason:
Command failed (workunit test fs/full/subvolume_rm.sh) on smithi139 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=62638798bcc99e8e8798e9ef35ca6ca77b41028a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_rm.sh'
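All three failure reasons show the same teuthology workunit invocation pattern: create and enter a temp directory under the test mount, export the `CEPH_*` environment, and run the workunit script under a 3-hour `timeout` (wrapped in `adjust-ulimits` and `ceph-coverage`). The sketch below is a minimal, runnable illustration of that shape only; a dummy script stands in for `subvolume_rm.sh`, and the teuthology-specific wrappers and most `CEPH_*` variables are omitted.

```shell
#!/bin/sh
# Sketch of the workunit invocation pattern from the failure reasons above.
# A dummy script stands in for subvolume_rm.sh; adjust-ulimits and
# ceph-coverage (teuthology-specific wrappers) are left out.
TESTDIR=$(mktemp -d)                       # stand-in for /home/ubuntu/cephtest
mkdir -p -- "$TESTDIR/mnt.0/client.0/tmp"  # working dir under the test mount
cd -- "$TESTDIR/mnt.0/client.0/tmp"

# Dummy workunit that exits with status 1, as the real runs did.
cat > "$TESTDIR/subvolume_rm.sh" <<'EOF'
#!/bin/sh
echo "removing subvolume on a full cluster..."
exit 1
EOF
chmod +x "$TESTDIR/subvolume_rm.sh"

# Same env-prefix + timeout shape as the logged command.
CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH="$PATH:/usr/sbin" \
    timeout 3h "$TESTDIR/subvolume_rm.sh"
STATUS=$?
echo "workunit exit status: $STATUS"
```

Because the dummy script exits 1, the final line reports status 1, mirroring the "Command failed ... with status 1" lines in the table.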