User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|
aemerson | 2016-04-26 16:59:57 | 2016-04-26 17:01:52 | 2016-04-27 05:04:23 | 12:02:31 | rados | wip-decontextualize-bisect | vps | — | 9 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine Type | Teuthology Branch | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|
dead | 150535 | 2016-04-26 17:00:09 | 2016-04-26 17:01:52 | 2016-04-27 05:04:23 | 12:02:31 | — | — | vps | master | rados/upgrade/{rados.yaml hammer-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{ec-rados-plugin=jerasure-k=3-m=1.yaml rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml test_cache-pool-snaps.yaml}}} | 3 |
fail | 150536 | 2016-04-26 17:00:10 | 2016-04-26 17:01:20 | 2016-04-26 17:33:21 | 0:32:01 | 0:29:25 | 0:02:36 | vps | master | rados/thrash/{hobj-sort.yaml rados.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/random.yaml msgr-failures/few.yaml thrashers/pggrow.yaml workloads/pool-snaps-few-objects.yaml} | 2 |

Failure Reason: Command failed on vpm140 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph health'
fail | 150537 | 2016-04-26 17:00:11 | 2016-04-26 17:03:57 | 2016-04-26 17:25:57 | 0:22:00 | 0:18:28 | 0:03:32 | vps | master | rados/thrash/{hobj-sort.yaml rados.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/random.yaml msgr-failures/few.yaml thrashers/mapgap.yaml workloads/snaps-few-objects.yaml} | 2 |

Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 150538 | 2016-04-26 17:00:12 | 2016-04-26 17:06:02 | 2016-04-26 17:38:02 | 0:32:00 | 0:29:49 | 0:02:11 | vps | master | rados/thrash/{hobj-sort.yaml rados.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/random.yaml msgr-failures/few.yaml thrashers/morepggrow.yaml workloads/pool-snaps-few-objects.yaml} | 2 |

Failure Reason: Command failed on vpm033 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd dump --format=json'
fail | 150539 | 2016-04-26 17:00:13 | 2016-04-26 17:10:04 | 2016-04-26 17:34:04 | 0:24:00 | 0:21:27 | 0:02:33 | vps | master | rados/thrash/{hobj-sort.yaml rados.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/random.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/snaps-few-objects.yaml} | 2 |

Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 150540 | 2016-04-26 17:00:14 | 2016-04-26 17:09:50 | 2016-04-26 17:41:51 | 0:32:01 | 0:29:13 | 0:02:48 | vps | master | rados/thrash/{hobj-sort.yaml rados.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/random.yaml msgr-failures/few.yaml thrashers/mapgap.yaml workloads/pool-snaps-few-objects.yaml} | 2 |

Failure Reason: Command failed on vpm070 with status 1: 'sudo ceph osd crush tunables default'
fail | 150541 | 2016-04-26 17:00:15 | 2016-04-26 17:05:41 | 2016-04-26 17:43:41 | 0:38:00 | 0:36:40 | 0:01:20 | vps | master | rados/thrash/{hobj-sort.yaml rados.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/random.yaml msgr-failures/few.yaml thrashers/pggrow.yaml workloads/snaps-few-objects.yaml} | 2 |

Failure Reason: Command failed on vpm114 with status 11: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd deep-scrub osd.5'
fail | 150542 | 2016-04-26 17:00:15 | 2016-04-26 17:08:45 | 2016-04-26 17:38:46 | 0:30:01 | 0:27:30 | 0:02:31 | vps | master | rados/thrash/{hobj-sort.yaml rados.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/random.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/pool-snaps-few-objects.yaml} | 2 |

Failure Reason: Command failed on vpm001 with status 1: 'sudo ceph osd crush tunables default'
fail | 150543 | 2016-04-26 17:00:16 | 2016-04-26 17:08:57 | 2016-04-26 17:42:58 | 0:34:01 | 0:31:35 | 0:02:26 | vps | master | rados/thrash/{hobj-sort.yaml rados.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/random.yaml msgr-failures/few.yaml thrashers/morepggrow.yaml workloads/snaps-few-objects.yaml} | 2 |

Failure Reason: Command failed on vpm048 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph health'
fail | 150544 | 2016-04-26 17:00:17 | 2016-04-26 17:10:40 | 2016-04-26 17:44:41 | 0:34:01 | 0:30:45 | 0:03:16 | vps | master | rados/thrash/{hobj-sort.yaml rados.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/random.yaml msgr-failures/fastclose.yaml thrashers/pggrow.yaml workloads/pool-snaps-few-objects.yaml} | 2 |

Failure Reason: Command failed on vpm143 with status 1: 'sudo ceph osd crush tunables default'