Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 5249863 2020-07-23 01:25:37 2020-07-25 23:34:19 2020-07-26 01:06:20 1:32:01 1:17:04 0:14:57 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-parallel/{point-to-point-upgrade supported-all-distro/centos_latest} 3
fail 5249865 2020-07-23 01:25:38 2020-07-25 23:34:20 2020-07-26 05:12:30 5:38:10 3:43:00 1:55:10 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi186 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'
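The same one-line workunit command recurs in the stress-split failures below. Unpacked for readability only (a sketch of what the reported command does; every path, variable, and wrapper is copied verbatim from the failure reason above, and the ordering assumes a manual re-run on the same test node):

    # Create and enter the workunit scratch directory on the mounted client
    mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
    cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp

    # Environment the workunit ran under (exactly as reported in the log)
    export CEPH_CLI_TEST_DUP_COMMAND=1
    export CEPH_REF=v14.2.2
    export TESTDIR="/home/ubuntu/cephtest"
    export CEPH_ARGS="--cluster ceph"
    export CEPH_ID="0"
    export PATH=$PATH:/usr/sbin
    export CEPH_BASE=/home/ubuntu/cephtest/clone.client.0
    export CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0

    # Teuthology wrappers (ulimit adjustment, coverage collection) plus a 3-hour
    # timeout around the librbd Python workunit; this step exited with status 1.
    adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage \
        timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh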

fail 5249866 2020-07-23 01:25:39 2020-07-25 23:34:20 2020-07-26 04:20:28 4:46:08 3:49:12 0:56:56 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-lz4 supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249867 2020-07-23 01:25:40 2020-07-25 23:34:19 2020-07-26 03:24:25 3:50:06 3:31:56 0:18:10 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-snappy supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249868 2020-07-23 01:25:41 2020-07-25 23:34:20 2020-07-26 03:36:27 4:02:07 3:39:45 0:22:22 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zlib supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249869 2020-07-23 01:25:42 2020-07-25 23:34:20 2020-07-26 04:06:28 4:32:08 3:57:00 0:35:08 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zstd supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi051 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249870 2020-07-23 01:25:43 2020-07-25 23:34:20 2020-07-26 04:28:29 4:54:09 3:40:04 1:14:05 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi181 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249871 2020-07-23 01:25:43 2020-07-25 23:34:19 2020-07-26 03:26:25 3:52:06 3:20:30 0:31:36 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/filestore-xfs supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi176 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

dead 5249872 2020-07-23 01:25:44 2020-07-25 23:34:20 2020-07-25 23:54:20 0:20:00 0:05:29 0:14:31 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-parallel/{point-to-point-upgrade supported-all-distro/rhel_7} 3
Failure Reason:

{'smithi081.front.sepia.ceph.com': {'attempts': 12, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result", 'changed': True}, 'smithi187.front.sepia.ceph.com': {'attempts': 12, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result", 'changed': True}}

fail 5249873 2020-07-23 01:25:45 2020-07-25 23:36:19 2020-07-26 03:48:26 4:12:07 3:56:41 0:15:26 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi160 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249874 2020-07-23 01:25:46 2020-07-25 23:36:20 2020-07-26 04:56:29 5:20:09 3:47:03 1:33:06 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-lz4 supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249875 2020-07-23 01:25:47 2020-07-25 23:36:19 2020-07-26 03:26:25 3:50:06 3:39:49 0:10:17 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-snappy supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi163 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249876 2020-07-23 01:25:48 2020-07-25 23:36:20 2020-07-26 04:28:28 4:52:08 3:17:46 1:34:22 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zlib supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi200 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249877 2020-07-23 01:25:49 2020-07-25 23:38:10 2020-07-26 05:42:21 6:04:11 3:49:26 2:14:45 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zstd supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi154 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249878 2020-07-23 01:25:50 2020-07-25 23:38:10 2020-07-26 03:48:17 4:10:07 3:57:23 0:12:44 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi105 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249879 2020-07-23 01:25:51 2020-07-25 23:38:10 2020-07-26 04:10:18 4:32:08 3:02:31 1:29:37 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/filestore-xfs supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi143 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

pass 5249880 2020-07-23 01:25:52 2020-07-25 23:38:10 2020-07-26 01:02:11 1:24:01 1:13:29 0:10:32 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-parallel/{point-to-point-upgrade supported-all-distro/ubuntu_16.04} 3
fail 5249881 2020-07-23 01:25:53 2020-07-25 23:38:10 2020-07-26 04:20:19 4:42:09 3:23:45 1:18:24 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi071 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249882 2020-07-23 01:25:54 2020-07-25 23:38:10 2020-07-26 04:18:18 4:40:08 3:48:08 0:52:00 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-lz4 supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249883 2020-07-23 01:25:55 2020-07-25 23:40:09 2020-07-26 03:54:16 4:14:07 3:55:26 0:18:41 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-snappy supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi191 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249884 2020-07-23 01:25:56 2020-07-25 23:40:10 2020-07-26 03:22:16 3:42:06 3:30:41 0:11:25 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zlib supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi129 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249885 2020-07-23 01:25:57 2020-07-25 23:40:09 2020-07-26 03:26:15 3:46:06 3:26:49 0:19:17 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zstd supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi125 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249886 2020-07-23 01:25:58 2020-07-25 23:40:10 2020-07-26 03:40:16 4:00:06 3:39:24 0:20:42 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi115 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249887 2020-07-23 01:25:59 2020-07-25 23:40:18 2020-07-26 02:46:23 3:06:05 2:54:37 0:11:28 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/filestore-xfs supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi106 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249888 2020-07-23 01:26:00 2020-07-25 23:40:35 2020-07-26 01:08:37 1:28:02 1:13:13 0:14:49 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-parallel/{point-to-point-upgrade supported-all-distro/ubuntu_latest} 3
Failure Reason:

"2020-07-26 00:27:01.158860 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi055:x (6395), after 301.078 seconds" in cluster log

fail 5249889 2020-07-23 01:26:01 2020-07-25 23:42:00 2020-07-26 04:02:07 4:20:07 3:31:59 0:48:08 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249890 2020-07-23 01:26:02 2020-07-25 23:42:00 2020-07-26 04:08:07 4:26:07 3:38:36 0:47:31 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-lz4 supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi014 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249891 2020-07-23 01:26:03 2020-07-25 23:42:01 2020-07-26 04:56:10 5:14:09 4:08:33 1:05:36 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-snappy supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi060 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249892 2020-07-23 01:26:04 2020-07-25 23:42:01 2020-07-26 03:50:07 4:08:06 3:38:54 0:29:12 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zlib supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi085 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249893 2020-07-23 01:26:05 2020-07-25 23:44:26 2020-07-26 03:48:33 4:04:07 3:54:02 0:10:05 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zstd supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi097 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249894 2020-07-23 01:26:06 2020-07-25 23:44:26 2020-07-26 03:24:32 3:40:06 3:26:10 0:13:56 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi188 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5249895 2020-07-23 01:26:07 2020-07-25 23:46:20 2020-07-26 03:18:25 3:32:05 3:17:50 0:14:15 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/filestore-xfs supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi075 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'