User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail |
---|---|---|---|---|---|---|---|---|---|
khiremat | 2021-05-12 10:51:05 | 2021-05-12 10:52:09 | 2021-05-12 11:11:31 | 0:19:22 | fs | wip-khiremat-39910-mgr-hang-osd-full-testing-32 | smithi | e37143d | 3 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6111419 | | 2021-05-12 10:51:13 | 2021-05-12 10:52:08 | 2021-05-12 11:09:37 | 0:17:29 | 0:08:27 | 0:09:02 | smithi | master | ubuntu | 20.04 | fs/full/{begin clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mgr-osd-full} | 1 |
Failure Reason:

```
Command failed (workunit test fs/full/subvolume_rm.sh) on smithi159 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e37143d14bcd0922286ef04700a614fe15b85b6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_rm.sh'
```
fail | 6111420 | | 2021-05-12 10:51:13 | 2021-05-12 10:52:09 | 2021-05-12 11:11:31 | 0:19:22 | 0:09:31 | 0:09:51 | smithi | master | ubuntu | 20.04 | fs/full/{begin clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mgr-osd-full} | 1 |
Failure Reason:

```
Command failed (workunit test fs/full/subvolume_rm.sh) on smithi136 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e37143d14bcd0922286ef04700a614fe15b85b6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_rm.sh'
```
fail | 6111421 | | 2021-05-12 10:51:13 | 2021-05-12 10:52:09 | 2021-05-12 11:11:30 | 0:19:21 | 0:09:18 | 0:10:03 | smithi | master | ubuntu | 20.04 | fs/full/{begin clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mgr-osd-full} | 1 |
Failure Reason:

```
Command failed (workunit test fs/full/subvolume_rm.sh) on smithi190 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e37143d14bcd0922286ef04700a614fe15b85b6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_rm.sh'
```