User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-04-27 14:24:25 | 2022-04-27 18:10:47 | 2022-04-28 02:01:30 | 7:50:43 | upgrade:octopus-x | pacific | smithi | 4fa079b | 2 | 23 | 4 |
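The summary row's Runtime appears to be the difference between its Updated and Started timestamps; a quick sanity check with Python's standard library, using the values from the row above (the column interpretation is an assumption, not documented on the page):

```python
from datetime import datetime

# Values copied from the run summary row above.
started = datetime.fromisoformat("2022-04-27 18:10:47")
updated = datetime.fromisoformat("2022-04-28 02:01:30")

# If Runtime = Updated - Started, this should print the table's 7:50:43.
print(updated - started)  # prints: 7:50:43
```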
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6808910 | 2022-04-27 14:24:36 | 2022-04-27 18:10:47 | 2022-04-27 18:44:22 | 0:33:35 | 0:27:01 | 0:06:34 | smithi | master | centos | 8.stream | upgrade:octopus-x/parallel-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/normal_pg_log 2-workload/{rgw_ragweed_prepare} 3-upgrade-sequence/upgrade-mon-osd-mds 4-pacific 5-final-workload/{rgw rgw_ragweed_check} centos_latest} | 4 | |
Failure Reason:
Command failed (ragweed tests against rgw) on smithi157 with status 1: "RAGWEED_CONF=/home/ubuntu/cephtest/archive/ragweed.client.1.conf RAGWEED_STAGES=check BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/ragweed/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/ragweed -v -a '!fails_on_rgw'" |
dead | 6808911 | 2022-04-27 14:24:37 | 2022-04-27 18:10:47 | 2022-04-28 00:53:31 | 6:42:44 | | | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-octopus-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-hybrid 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/connectivity thrashosds-health ubuntu_18.04} | 5 |
Failure Reason:
hit max job timeout
pass | 6808912 | 2022-04-27 14:24:38 | 2022-04-27 18:13:58 | 2022-04-27 20:57:08 | 2:43:10 | 2:33:52 | 0:09:18 | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/stress-split-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-pacific 8-final-workload/{rbd-python snaps-many-objects} mon_election/classic objectstore/filestore-xfs thrashosds-health ubuntu_18.04} | 5 | |
fail | 6808913 | 2022-04-27 14:24:39 | 2022-04-27 18:15:39 | 2022-04-27 19:36:29 | 1:20:50 | 1:15:31 | 0:05:19 | smithi | master | rhel | 8.4 | upgrade:octopus-x/stress-split/{0-distro/rhel_8.4_container_tools_rhel8 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi006 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 6808914 | 2022-04-27 14:24:40 | 2022-04-27 18:15:39 | 2022-04-27 18:56:19 | 0:40:40 | 0:31:56 | 0:08:44 | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/parallel/{0-distro/ubuntu_18.04 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_journal.sh) on smithi155 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_journal.sh' |
fail | 6808915 | 2022-04-27 14:24:41 | 2022-04-27 18:16:30 | 2022-04-27 19:02:19 | 0:45:49 | 0:34:51 | 0:10:58 | smithi | master | ubuntu | 20.04 | upgrade:octopus-x/parallel/{0-distro/ubuntu_20.04 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_journal.sh) on smithi135 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_journal.sh' |
fail | 6808916 | 2022-04-27 14:24:42 | 2022-04-27 18:16:40 | 2022-04-27 18:48:05 | 0:31:25 | 0:24:51 | 0:06:34 | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/stress-split/{0-distro/ubuntu_18.04 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} | 2 | |
Failure Reason:
Command failed (workunit test rbd/test_librbd.sh) on smithi019 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh' |
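Unlike the status-1 failures above, status 139 follows the shell convention of 128 + signal number, i.e. 128 + 11 (SIGSEGV): the `test_librbd.sh` workunit process was killed by a segmentation fault rather than failing an ordinary test assertion. A minimal illustration of the convention (not the Ceph test itself):

```shell
# A process killed by SIGSEGV is reported by its parent shell as 128 + 11.
sh -c 'kill -SEGV $$'
echo "exit status: $?"   # prints: exit status: 139
```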
fail | 6808917 | 2022-04-27 14:24:43 | 2022-04-27 18:16:41 | 2022-04-27 18:54:41 | 0:38:00 | 0:29:41 | 0:08:19 | smithi | master | centos | 8.stream | upgrade:octopus-x/parallel/{0-distro/centos_8.stream_container_tools 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_journal.sh) on smithi187 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_journal.sh' |
fail | 6808918 | 2022-04-27 14:24:44 | 2022-04-27 18:17:01 | 2022-04-27 18:49:11 | 0:32:10 | 0:21:45 | 0:10:25 | smithi | master | centos | 8.stream | upgrade:octopus-x/parallel-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/normal_pg_log 2-workload/{rgw_ragweed_prepare} 3-upgrade-sequence/upgrade-all 4-pacific 5-final-workload/{rgw rgw_ragweed_check} centos_latest} | 4 | |
Failure Reason:
Command failed (ragweed tests against rgw) on smithi138 with status 1: "RAGWEED_CONF=/home/ubuntu/cephtest/archive/ragweed.client.1.conf RAGWEED_STAGES=check BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/ragweed/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/ragweed -v -a '!fails_on_rgw'" |
dead | 6808919 | 2022-04-27 14:24:45 | 2022-04-27 18:20:12 | 2022-04-28 01:03:16 | 6:43:04 | | | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-octopus-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-hybrid 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/classic thrashosds-health ubuntu_18.04} | 5 |
Failure Reason:
hit max job timeout
fail | 6808920 | 2022-04-27 14:24:46 | 2022-04-27 18:22:13 | 2022-04-27 20:07:42 | 1:45:29 | 1:32:18 | 0:13:11 | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/stress-split-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-pacific 8-final-workload/{rbd-python snaps-many-objects} mon_election/classic objectstore/bluestore-hybrid thrashosds-health ubuntu_18.04} | 5 | |
Failure Reason:
Command failed on smithi053 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json' |
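Status 124 here is the exit code GNU coreutils `timeout` uses when the wrapped command exceeds its limit; the 120-second budget for `ceph osd dump` expired, which points at an unresponsive cluster during the upgrade rather than an error from the command itself. A minimal illustration of the convention:

```shell
# GNU `timeout` kills the command after the limit and exits with 124.
timeout 1 sleep 5
echo "exit status: $?"   # prints: exit status: 124
```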
fail | 6808921 | 2022-04-27 14:24:47 | 2022-04-27 18:25:34 | 2022-04-27 20:05:10 | 1:39:36 | 1:22:59 | 0:16:37 | smithi | master | ubuntu | 20.04 | upgrade:octopus-x/stress-split/{0-distro/ubuntu_20.04 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 6808922 | 2022-04-27 14:24:48 | 2022-04-27 18:30:55 | 2022-04-27 19:21:35 | 0:50:40 | 0:45:04 | 0:05:36 | smithi | master | rhel | 8.4 | upgrade:octopus-x/parallel/{0-distro/rhel_8.4_container_tools_3.0 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_journal.sh) on smithi040 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_journal.sh' |
fail | 6808923 | 2022-04-27 14:24:49 | 2022-04-27 18:30:55 | 2022-04-27 19:18:40 | 0:47:45 | 0:40:38 | 0:07:07 | smithi | master | rhel | 8.4 | upgrade:octopus-x/parallel/{0-distro/rhel_8.4_container_tools_rhel8 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_journal.sh) on smithi133 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_journal.sh' |
fail | 6808924 | 2022-04-27 14:24:50 | 2022-04-27 18:31:35 | 2022-04-27 19:55:34 | 1:23:59 | 1:15:27 | 0:08:32 | smithi | master | centos | 8.stream | upgrade:octopus-x/stress-split/{0-distro/centos_8.stream_container_tools 0-roles 1-start 2-first-half-tasks/snaps-few-objects 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi074 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 6808925 | 2022-04-27 14:24:51 | 2022-04-27 18:32:36 | 2022-04-27 19:19:52 | 0:47:16 | 0:27:22 | 0:19:54 | smithi | master | centos | 8.stream | upgrade:octopus-x/parallel-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-workload/{rgw_ragweed_prepare} 3-upgrade-sequence/upgrade-mon-osd-mds 4-pacific 5-final-workload/{rgw rgw_ragweed_check} centos_latest} | 4 | |
Failure Reason:
Command failed (ragweed tests against rgw) on smithi157 with status 1: "RAGWEED_CONF=/home/ubuntu/cephtest/archive/ragweed.client.1.conf RAGWEED_STAGES=check BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/ragweed/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/ragweed -v -a '!fails_on_rgw'" |
dead | 6808926 | 2022-04-27 14:24:52 | 2022-04-27 18:44:28 | 2022-04-28 01:31:31 | 6:47:03 | | | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-octopus-install/octopus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/connectivity thrashosds-health ubuntu_18.04} | 5 |
Failure Reason:
hit max job timeout
pass | 6808927 | 2022-04-27 14:24:53 | 2022-04-27 18:49:20 | 2022-04-27 21:56:14 | 3:06:54 | 2:54:29 | 0:12:25 | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/stress-split-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-pacific 8-final-workload/{rbd-python snaps-many-objects} mon_election/connectivity objectstore/filestore-xfs thrashosds-health ubuntu_18.04} | 5 | |
fail | 6808928 | 2022-04-27 14:24:54 | 2022-04-27 18:54:51 | 2022-04-27 19:35:54 | 0:41:03 | 0:33:21 | 0:07:42 | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/parallel/{0-distro/ubuntu_18.04 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_journal.sh) on smithi155 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_journal.sh' |
fail | 6808929 | 2022-04-27 14:24:55 | 2022-04-27 18:56:22 | 2022-04-27 20:43:31 | 1:47:09 | 1:34:18 | 0:12:51 | smithi | master | rhel | 8.4 | upgrade:octopus-x/stress-split/{0-distro/rhel_8.4_container_tools_3.0 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi135 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 6808930 | 2022-04-27 14:24:56 | 2022-04-27 19:02:23 | 2022-04-27 19:49:37 | 0:47:14 | 0:35:06 | 0:12:08 | smithi | master | ubuntu | 20.04 | upgrade:octopus-x/parallel/{0-distro/ubuntu_20.04 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_journal.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_journal.sh' |
fail | 6808931 | 2022-04-27 14:24:58 | 2022-04-27 19:03:54 | 2022-04-27 20:23:25 | 1:19:31 | 1:13:12 | 0:06:19 | smithi | master | rhel | 8.4 | upgrade:octopus-x/stress-split/{0-distro/rhel_8.4_container_tools_rhel8 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi100 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 6808932 | 2022-04-27 14:24:59 | 2022-04-27 19:03:54 | 2022-04-27 19:40:45 | 0:36:51 | 0:29:51 | 0:07:00 | smithi | master | centos | 8.stream | upgrade:octopus-x/parallel/{0-distro/centos_8.stream_container_tools 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_journal.sh) on smithi022 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_journal.sh' |
fail | 6808933 | 2022-04-27 14:25:00 | 2022-04-27 19:03:55 | 2022-04-27 19:31:52 | 0:27:57 | 0:21:15 | 0:06:42 | smithi | master | centos | 8.stream | upgrade:octopus-x/parallel-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-workload/{rgw_ragweed_prepare} 3-upgrade-sequence/upgrade-all 4-pacific 5-final-workload/{rgw rgw_ragweed_check} centos_latest} | 4 | |
Failure Reason:
Command failed (ragweed tests against rgw) on smithi131 with status 1: "RAGWEED_CONF=/home/ubuntu/cephtest/archive/ragweed.client.1.conf RAGWEED_STAGES=check BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/ragweed/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/ragweed -v -a '!fails_on_rgw'" |
dead | 6808934 | 2022-04-27 14:25:01 | 2022-04-27 19:04:45 | 2022-04-28 02:01:30 | 6:56:45 | | | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-octopus-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-pacific 7-final-workload mon_election/classic thrashosds-health ubuntu_18.04} | 5 |
Failure Reason:
hit max job timeout
fail | 6808935 | 2022-04-27 14:25:02 | 2022-04-27 19:19:58 | 2022-04-27 21:03:05 | 1:43:07 | 1:34:10 | 0:08:57 | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/stress-split-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-pacific 8-final-workload/{rbd-python snaps-many-objects} mon_election/connectivity objectstore/bluestore-hybrid thrashosds-health ubuntu_18.04} | 5 | |
Failure Reason:
Command failed on smithi040 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json' |
fail | 6808936 | 2022-04-27 14:25:03 | 2022-04-27 19:21:38 | 2022-04-27 20:13:03 | 0:51:25 | 0:42:29 | 0:08:56 | smithi | master | rhel | 8.4 | upgrade:octopus-x/parallel/{0-distro/rhel_8.4_container_tools_3.0 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_journal.sh) on smithi122 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_journal.sh' |
fail | 6808937 | 2022-04-27 14:25:04 | 2022-04-27 19:23:59 | 2022-04-27 20:45:10 | 1:21:11 | 1:15:07 | 0:06:04 | smithi | master | ubuntu | 18.04 | upgrade:octopus-x/stress-split/{0-distro/ubuntu_18.04 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi005 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 6808938 | 2022-04-27 14:25:05 | 2022-04-27 19:23:59 | 2022-04-27 20:15:08 | 0:51:09 | 0:41:21 | 0:09:48 | smithi | master | rhel | 8.4 | upgrade:octopus-x/parallel/{0-distro/rhel_8.4_container_tools_rhel8 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_journal.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=octopus TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_journal.sh' |