Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
dead | 4711492 | 2020-01-27 15:35:44 | 2020-01-28 18:46:11 | 2020-01-28 20:30:12 | 1:44:01 | - | - | smithi | master | - | - | powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-comp.yaml powercycle/default.yaml tasks/cfuse_workunit_kernel_untar_build.yaml thrashosds-health.yaml whitelist_health.yaml} | 3
Failure Reason:

SSH connection to smithi163 was lost: 'uname -r'

fail | 4711493 | 2020-01-27 15:35:45 | 2020-01-28 18:46:15 | 2020-01-28 21:28:19 | 2:42:04 | 2:10:39 | 0:31:25 | smithi | master | - | - | powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-stupid.yaml powercycle/default.yaml tasks/radosbench.yaml thrashosds-health.yaml whitelist_health.yaml} | 4
Failure Reason:

reached maximum tries (500) after waiting for 3000 seconds

fail | 4711494 | 2020-01-27 15:35:46 | 2020-01-28 18:47:37 | 2020-01-28 20:57:38 | 2:10:01 | 0:23:16 | 1:46:45 | smithi | master | - | - | powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/filestore-xfs.yaml powercycle/default.yaml tasks/cfuse_workunit_kernel_untar_build.yaml thrashosds-health.yaml whitelist_health.yaml} | 4
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi101 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2a788890d42cb111c71cfb747fd571f65c72b7f6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

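Several jobs in this run (4711494 here, and 4711497 and 4711499 below) fail the kernel_untar_build.sh workunit with status 2 from the same wrapper one-liner. As a rough, hypothetical repro sketch only, the Python below decomposes that one-liner into its environment and command stages; the paths, the CEPH_REF sha1, and the wrappers (adjust-ulimits, ceph-coverage, timeout 6h) are copied verbatim from the failure message, and rerunning it assumes a test node that still has the /home/ubuntu/cephtest checkout and the client.0 mount. It is not part of the teuthology suite.

```python
# Hypothetical repro sketch: rebuild the failing workunit invocation by hand.
# All paths, env vars, and the CEPH_REF sha1 are taken from the failure message;
# the node, the CephFS mount, and the checkout under /home/ubuntu/cephtest are assumed.
import os
import subprocess

TESTDIR = "/home/ubuntu/cephtest"
CLONE = f"{TESTDIR}/clone.client.0"          # suite checkout the workunit runs from
WORKDIR = f"{TESTDIR}/mnt.0/client.0/tmp"    # scratch dir on the mounted client

env = dict(
    os.environ,
    CEPH_CLI_TEST_DUP_COMMAND="1",
    CEPH_REF="2a788890d42cb111c71cfb747fd571f65c72b7f6",
    TESTDIR=TESTDIR,
    CEPH_ARGS="--cluster ceph",
    CEPH_ID="0",
    CEPH_BASE=CLONE,
    CEPH_ROOT=CLONE,
    PATH=os.environ["PATH"] + ":/usr/sbin",
)

# Mirrors the `mkdir -p -- ... && cd -- ...` prefix of the original command.
os.makedirs(WORKDIR, exist_ok=True)

# adjust-ulimits and ceph-coverage are wrappers installed on the test node;
# `timeout 6h` bounds the kernel untar/build exactly as in the failing job.
cmd = [
    "adjust-ulimits",
    "ceph-coverage", f"{TESTDIR}/archive/coverage",
    "timeout", "6h",
    f"{CLONE}/qa/workunits/kernel_untar_build.sh",
]
result = subprocess.run(cmd, cwd=WORKDIR, env=env)
print("exit status:", result.returncode)     # the failed jobs above reported status 2
```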
pass | 4711495 | 2020-01-27 15:35:47 | 2020-01-28 18:47:37 | 2020-01-28 21:01:39 | 2:14:02 | 0:28:06 | 1:45:56 | smithi | master | - | - | powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-bitmap.yaml powercycle/default.yaml tasks/cfuse_workunit_misc.yaml thrashosds-health.yaml whitelist_health.yaml} | 4
fail | 4711496 | 2020-01-27 15:35:48 | 2020-01-28 18:47:37 | 2020-01-28 23:59:43 | 5:12:06 | 4:03:13 | 1:08:53 | smithi | master | - | - | powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-bitmap.yaml powercycle/default.yaml tasks/radosbench.yaml thrashosds-health.yaml whitelist_health.yaml} | 4
Failure Reason:

reached maximum tries (500) after waiting for 3000 seconds

fail | 4711497 | 2020-01-27 15:35:49 | 2020-01-28 18:48:11 | 2020-01-28 20:06:12 | 1:18:01 | 0:27:17 | 0:50:44 | smithi | master | - | - | powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-bitmap.yaml powercycle/default.yaml tasks/cfuse_workunit_kernel_untar_build.yaml thrashosds-health.yaml whitelist_health.yaml} | 4
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi049 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2a788890d42cb111c71cfb747fd571f65c72b7f6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail | 4711498 | 2020-01-27 15:35:50 | 2020-01-28 18:48:16 | 2020-01-28 22:52:21 | 4:04:05 | 2:20:40 | 1:43:25 | smithi | master | - | - | powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-comp.yaml powercycle/default.yaml tasks/radosbench.yaml thrashosds-health.yaml whitelist_health.yaml} | 4
Failure Reason:

reached maximum tries (500) after waiting for 3000 seconds

fail | 4711499 | 2020-01-27 15:35:51 | 2020-01-28 18:49:29 | 2020-01-28 20:13:30 | 1:24:01 | 0:21:47 | 1:02:14 | smithi | master | - | - | powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-stupid.yaml powercycle/default.yaml tasks/cfuse_workunit_kernel_untar_build.yaml thrashosds-health.yaml whitelist_health.yaml} | 4
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi162 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2a788890d42cb111c71cfb747fd571f65c72b7f6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail | 4711500 | 2020-01-27 15:35:52 | 2020-01-28 18:49:29 | 2020-01-28 23:07:37 | 4:18:08 | 2:48:12 | 1:29:56 | smithi | master | - | - | powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/filestore-xfs.yaml powercycle/default.yaml tasks/radosbench.yaml thrashosds-health.yaml whitelist_health.yaml} | 4
Failure Reason:

reached maximum tries (500) after waiting for 3000 seconds
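
All four radosbench jobs (4711493, 4711496, 4711498, 4711500) fail with this same timeout. The wording matches a bounded retry guard of the kind teuthology's safe_while helper provides: 500 attempts spaced 6 seconds apart is 3000 seconds of waiting. The sketch below is a minimal, stand-alone illustration of how such a guard turns a condition that never becomes true into exactly this error; the cluster_is_clean predicate is purely hypothetical and the code is not teuthology's implementation.

```python
# Minimal sketch of a bounded retry guard that emits a message like
# "reached maximum tries (500) after waiting for 3000 seconds".
# The predicate is hypothetical; this is an independent illustration,
# not teuthology's safe_while.
import time


class MaxTriesReached(Exception):
    pass


def wait_until(check, tries=500, sleep=6):
    """Call check() up to `tries` times, sleeping `sleep` seconds after each
    failed attempt; raise once the tries * sleep second budget is exhausted."""
    for attempt in range(1, tries + 1):
        if check():
            return attempt
        time.sleep(sleep)
    raise MaxTriesReached(
        f"reached maximum tries ({tries}) after waiting for {tries * sleep} seconds"
    )


def cluster_is_clean():
    # Hypothetical stand-in for the health/benchmark condition the failed jobs
    # were polling; here it never succeeds, so the budget runs out.
    return False


if __name__ == "__main__":
    wait_until(cluster_is_clean)  # raises MaxTriesReached after ~3000 seconds
```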