Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 5245129 2020-07-20 19:18:05 2020-07-20 19:18:17 2020-07-20 20:44:19 1:26:02 1:14:54 0:11:08 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-parallel/{point-to-point-upgrade supported-all-distro/centos_latest} 3
fail 5245130 2020-07-20 19:18:06 2020-07-20 19:18:17 2020-07-20 23:46:27 4:28:10 4:14:54 0:13:16 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi094 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245131 2020-07-20 19:18:07 2020-07-20 19:19:38 2020-07-20 23:25:46 4:06:08 3:40:44 0:25:24 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-lz4 supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245132 2020-07-20 19:18:08 2020-07-20 19:19:48 2020-07-20 23:33:57 4:14:09 3:59:25 0:14:44 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-snappy supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi062 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245133 2020-07-20 19:18:09 2020-07-20 19:20:03 2020-07-20 19:42:03 0:22:00 0:03:13 0:18:47 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zlib supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed on smithi180 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=14.2.2-1bionic ceph-mds=14.2.2-1bionic ceph-mgr=14.2.2-1bionic ceph-common=14.2.2-1bionic ceph-fuse=14.2.2-1bionic ceph-test=14.2.2-1bionic radosgw=14.2.2-1bionic python-ceph=14.2.2-1bionic libcephfs2=14.2.2-1bionic libcephfs-dev=14.2.2-1bionic librados2=14.2.2-1bionic librbd1=14.2.2-1bionic rbd-fuse=14.2.2-1bionic librados2=14.2.2-1bionic'
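Every ubuntu 18.04 job in this run fails the same way at install time: apt-get exits with status 100, which suggests the pinned `14.2.2-1bionic` packages could not be installed on bionic. A minimal sketch of pulling the pinned version out of such a failure line (the `cmd` string below is an abbreviated stand-in for the full logged command):

```shell
# Extract the pinned Debian package version from a failing apt-get command line.
# cmd is an abbreviated stand-in for the logged command, not the full string.
cmd="sudo DEBIAN_FRONTEND=noninteractive apt-get -y install ceph=14.2.2-1bionic ceph-mds=14.2.2-1bionic"
ver=$(printf '%s\n' "$cmd" | grep -o 'ceph=[^ ]*' | head -n1 | cut -d= -f2)
echo "$ver"   # prints 14.2.2-1bionic
```

On the node itself, `apt-cache policy ceph` would show whether that version is actually available from the configured repositories.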

fail 5245134 2020-07-20 19:18:10 2020-07-20 19:20:05 2020-07-20 23:46:15 4:26:10 4:14:48 0:11:22 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zstd supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi166 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245135 2020-07-20 19:18:11 2020-07-20 19:20:05 2020-07-20 23:12:12 3:52:07 3:42:13 0:09:54 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi196 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245136 2020-07-20 19:18:12 2020-07-20 19:20:12 2020-07-20 22:54:18 3:34:06 3:22:59 0:11:07 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/filestore-xfs supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi075 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245137 2020-07-20 19:18:13 2020-07-20 19:20:26 2020-07-20 20:46:28 1:26:02 1:16:53 0:09:09 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-parallel/{point-to-point-upgrade supported-all-distro/rhel_7} 3
Failure Reason:

"2020-07-20 20:08:46.950258 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi136:x (7578), after 304.285 seconds" in cluster log

fail 5245138 2020-07-20 19:18:14 2020-07-20 19:22:01 2020-07-20 19:38:01 0:16:00 0:03:13 0:12:47 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed on smithi175 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=14.2.2-1bionic ceph-mds=14.2.2-1bionic ceph-mgr=14.2.2-1bionic ceph-common=14.2.2-1bionic ceph-fuse=14.2.2-1bionic ceph-test=14.2.2-1bionic radosgw=14.2.2-1bionic python-ceph=14.2.2-1bionic libcephfs2=14.2.2-1bionic libcephfs-dev=14.2.2-1bionic librados2=14.2.2-1bionic librbd1=14.2.2-1bionic rbd-fuse=14.2.2-1bionic librados2=14.2.2-1bionic'

fail 5245139 2020-07-20 19:18:15 2020-07-20 19:22:01 2020-07-20 23:16:09 3:54:08 3:41:48 0:12:20 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-lz4 supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi139 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245140 2020-07-20 19:18:16 2020-07-20 19:22:02 2020-07-20 23:28:09 4:06:07 3:49:51 0:16:16 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-snappy supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi191 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245141 2020-07-20 19:18:17 2020-07-20 19:22:14 2020-07-20 23:42:24 4:20:10 3:56:47 0:23:23 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zlib supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi013 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245142 2020-07-20 19:18:18 2020-07-20 19:23:24 2020-07-20 20:01:24 0:38:00 0:03:17 0:34:43 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zstd supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed on smithi141 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=14.2.2-1bionic ceph-mds=14.2.2-1bionic ceph-mgr=14.2.2-1bionic ceph-common=14.2.2-1bionic ceph-fuse=14.2.2-1bionic ceph-test=14.2.2-1bionic radosgw=14.2.2-1bionic python-ceph=14.2.2-1bionic libcephfs2=14.2.2-1bionic libcephfs-dev=14.2.2-1bionic librados2=14.2.2-1bionic librbd1=14.2.2-1bionic rbd-fuse=14.2.2-1bionic librados2=14.2.2-1bionic'

fail 5245143 2020-07-20 19:18:19 2020-07-20 19:24:00 2020-07-20 23:56:10 4:32:10 4:06:56 0:25:14 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi121 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245144 2020-07-20 19:18:20 2020-07-20 19:24:10 2020-07-20 23:16:17 3:52:07 3:21:19 0:30:48 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/filestore-xfs supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

pass 5245145 2020-07-20 19:18:21 2020-07-20 19:24:49 2020-07-20 20:48:51 1:24:02 1:12:49 0:11:13 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-parallel/{point-to-point-upgrade supported-all-distro/ubuntu_16.04} 3
fail 5245146 2020-07-20 19:18:22 2020-07-20 19:26:16 2020-07-20 23:40:25 4:14:09 4:00:06 0:14:03 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi005 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245147 2020-07-20 19:18:23 2020-07-20 19:26:16 2020-07-20 19:40:15 0:13:59 0:03:14 0:10:45 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-lz4 supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed on smithi179 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=14.2.2-1bionic ceph-mds=14.2.2-1bionic ceph-mgr=14.2.2-1bionic ceph-common=14.2.2-1bionic ceph-fuse=14.2.2-1bionic ceph-test=14.2.2-1bionic radosgw=14.2.2-1bionic python-ceph=14.2.2-1bionic libcephfs2=14.2.2-1bionic libcephfs-dev=14.2.2-1bionic librados2=14.2.2-1bionic librbd1=14.2.2-1bionic rbd-fuse=14.2.2-1bionic librados2=14.2.2-1bionic'

fail 5245148 2020-07-20 19:18:24 2020-07-20 19:28:29 2020-07-20 23:28:36 4:00:07 3:44:45 0:15:22 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-snappy supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi051 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245149 2020-07-20 19:18:25 2020-07-20 19:28:29 2020-07-20 23:30:38 4:02:09 3:49:21 0:12:48 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zlib supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi198 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245150 2020-07-20 19:18:26 2020-07-20 19:32:04 2020-07-20 23:48:13 4:16:09 4:00:24 0:15:45 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zstd supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi176 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245151 2020-07-20 19:18:27 2020-07-20 19:32:04 2020-07-20 19:44:03 0:11:59 0:03:17 0:08:42 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed on smithi161 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=14.2.2-1bionic ceph-mds=14.2.2-1bionic ceph-mgr=14.2.2-1bionic ceph-common=14.2.2-1bionic ceph-fuse=14.2.2-1bionic ceph-test=14.2.2-1bionic radosgw=14.2.2-1bionic python-ceph=14.2.2-1bionic libcephfs2=14.2.2-1bionic libcephfs-dev=14.2.2-1bionic librados2=14.2.2-1bionic librbd1=14.2.2-1bionic rbd-fuse=14.2.2-1bionic librados2=14.2.2-1bionic'

fail 5245152 2020-07-20 19:18:28 2020-07-20 19:32:14 2020-07-20 23:12:21 3:40:07 3:29:14 0:10:53 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/filestore-xfs supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi036 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245153 2020-07-20 19:18:29 2020-07-20 19:33:54 2020-07-20 20:05:54 0:32:00 0:03:24 0:28:36 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-parallel/{point-to-point-upgrade supported-all-distro/ubuntu_latest} 3
Failure Reason:

Command failed on smithi164 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=14.2.2-1bionic ceph-mds=14.2.2-1bionic ceph-mgr=14.2.2-1bionic ceph-common=14.2.2-1bionic ceph-fuse=14.2.2-1bionic ceph-test=14.2.2-1bionic radosgw=14.2.2-1bionic python-ceph=14.2.2-1bionic libcephfs2=14.2.2-1bionic libcephfs-dev=14.2.2-1bionic librados2=14.2.2-1bionic librbd1=14.2.2-1bionic rbd-fuse=14.2.2-1bionic'

fail 5245154 2020-07-20 19:18:30 2020-07-20 19:33:54 2020-07-20 23:52:04 4:18:10 4:03:25 0:14:45 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-bitmap supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi180 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245155 2020-07-20 19:18:31 2020-07-20 19:34:03 2020-07-20 23:52:13 4:18:10 4:01:26 0:16:44 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-lz4 supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi078 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245156 2020-07-20 19:18:32 2020-07-20 19:34:12 2020-07-20 19:52:12 0:18:00 0:03:23 0:14:37 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-snappy supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed on smithi141 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=14.2.2-1bionic ceph-mds=14.2.2-1bionic ceph-mgr=14.2.2-1bionic ceph-common=14.2.2-1bionic ceph-fuse=14.2.2-1bionic ceph-test=14.2.2-1bionic radosgw=14.2.2-1bionic python-ceph=14.2.2-1bionic libcephfs2=14.2.2-1bionic libcephfs-dev=14.2.2-1bionic librados2=14.2.2-1bionic librbd1=14.2.2-1bionic rbd-fuse=14.2.2-1bionic librados2=14.2.2-1bionic'

fail 5245157 2020-07-20 19:18:33 2020-07-20 19:36:01 2020-07-20 23:26:08 3:50:07 3:36:07 0:14:00 smithi master centos 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zlib supported-all-distro/centos_latest thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi151 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245158 2020-07-20 19:18:34 2020-07-20 19:36:14 2020-07-20 23:30:22 3:54:08 3:47:47 0:06:21 smithi master rhel 7.8 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-comp-zstd supported-all-distro/rhel_7 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi205 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245159 2020-07-20 19:18:35 2020-07-20 19:38:10 2020-07-20 23:32:18 3:54:08 3:43:11 0:10:57 smithi master ubuntu 16.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/ubuntu_16.04 thrashosds-health} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi093 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=v14.2.2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 5245160 2020-07-20 19:18:35 2020-07-20 19:38:10 2020-07-20 19:50:09 0:11:59 0:03:13 0:08:46 smithi master ubuntu 18.04 upgrade:nautilus-p2p/nautilus-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 7-final-workload/{rbd-python rgw-swift snaps-many-objects} objectstore/filestore-xfs supported-all-distro/ubuntu_latest thrashosds-health} 3
Failure Reason:

Command failed on smithi115 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=14.2.2-1bionic ceph-mds=14.2.2-1bionic ceph-mgr=14.2.2-1bionic ceph-common=14.2.2-1bionic ceph-fuse=14.2.2-1bionic ceph-test=14.2.2-1bionic radosgw=14.2.2-1bionic python-ceph=14.2.2-1bionic libcephfs2=14.2.2-1bionic libcephfs-dev=14.2.2-1bionic librados2=14.2.2-1bionic librbd1=14.2.2-1bionic rbd-fuse=14.2.2-1bionic librados2=14.2.2-1bionic'
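The failure reasons above reduce to three recurring signatures: the `rbd/test_librbd_python.sh` workunit exiting with status 1 (most jobs), `apt-get install ceph=14.2.2-1bionic ...` exiting with status 100 (every ubuntu 18.04 job), and a single MDS client-eviction warning (job 5245137). As a minimal sketch of tallying them, assuming the failure-reason lines have been copied into a hypothetical local `failures.txt` (three sample lines stand in here):

```shell
# Tally recurring failure signatures from saved failure-reason lines.
# failures.txt is a hypothetical local copy; three sample lines stand in here.
cat > failures.txt <<'EOF'
Command failed (workunit test rbd/test_librbd_python.sh) on smithi094 with status 1
Command failed on smithi180 with status 100: 'sudo ... apt-get ... install ceph=14.2.2-1bionic ...'
"... mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi136:x (7578) ..." in cluster log
EOF

workunit=$(grep -c 'test_librbd_python.sh' failures.txt)    # librbd python workunit failures
pkg=$(grep -c 'with status 100' failures.txt)               # apt-get package-install failures
mds=$(grep -c 'evicting unresponsive client' failures.txt)  # MDS eviction warnings
echo "workunit=$workunit pkg=$pkg mds=$mds"   # prints workunit=1 pkg=1 mds=1
```

Run against the full set of failure reasons from this page, the same three patterns cover every failed job.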