User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2021-08-28 14:21:29 | 2021-08-28 14:23:32 | 2021-08-29 02:41:51 | 12:18:19 | rados | wip-yuri-master-8.27.21 | smithi | c696e94 | 8 | 7 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6364181 | 2021-08-28 14:22:53 | 2021-08-28 14:23:32 | 2021-08-28 16:55:51 | 2:32:19 | 2:01:13 | 0:31:06 | smithi | master | centos | 8.stream | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8.stream}} | 1 | |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:-*SyntheticMatrixC* --gtest_catch_exceptions=0\''
fail | 6364182 | 2021-08-28 14:22:54 | 2021-08-28 14:23:32 | 2021-08-28 14:59:09 | 0:35:37 | 0:25:12 | 0:10:25 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi022 with status 4: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c696e943c447f7f5785f651098cb7209f68915d7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6364183 | 2021-08-28 14:22:55 | 2021-08-28 14:23:32 | 2021-08-28 14:53:12 | 0:29:40 | 0:17:38 | 0:12:02 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/lockdep} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_rgw.sh) on smithi120 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c696e943c447f7f5785f651098cb7209f68915d7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
pass | 6364184 | 2021-08-28 14:22:55 | 2021-08-28 14:23:32 | 2021-08-28 14:53:43 | 0:30:11 | 0:17:39 | 0:12:32 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/rgw-ingress 3-final} | 2 | |
fail | 6364185 | 2021-08-28 14:22:56 | 2021-08-28 14:23:33 | 2021-08-28 15:13:52 | 0:50:19 | 0:36:45 | 0:13:34 | smithi | master | centos | 8.3 | rados/upgrade/parallel/{0-distro$/{centos_8.3_container_tools_3.0} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_rgw.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
dead | 6364186 | 2021-08-28 14:22:57 | 2021-08-28 14:23:33 | 2021-08-29 02:35:03 | 12:11:30 | | | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason: hit max job timeout
dead | 6364187 | 2021-08-28 14:22:58 | 2021-08-28 14:23:33 | 2021-08-29 02:35:04 | 12:11:31 | | | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
Failure Reason: hit max job timeout
pass | 6364188 | 2021-08-28 14:22:59 | 2021-08-28 14:23:35 | 2021-08-28 15:09:40 | 0:46:05 | 0:35:53 | 0:10:12 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
fail | 6364189 | 2021-08-28 14:23:00 | 2021-08-28 14:23:35 | 2021-08-28 14:58:00 | 0:34:25 | 0:24:46 | 0:09:39 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi085 with status 4: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c696e943c447f7f5785f651098cb7209f68915d7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6364190 | 2021-08-28 14:23:01 | 2021-08-28 14:23:35 | 2021-08-28 23:20:44 | 8:57:09 | 8:41:52 | 0:15:17 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
pass | 6364191 | 2021-08-28 14:23:02 | 2021-08-28 14:23:35 | 2021-08-28 15:10:58 | 0:47:23 | 0:31:53 | 0:15:30 | smithi | master | rhel | 8.4 | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 2 | |
dead | 6364192 | 2021-08-28 14:23:03 | 2021-08-28 14:32:07 | 2021-08-29 02:41:51 | 12:09:44 | | | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} | 3 | |
Failure Reason: hit max job timeout
fail | 6364193 | 2021-08-28 14:23:04 | 2021-08-28 14:32:38 | 2021-08-28 19:17:39 | 4:45:01 | 4:33:25 | 0:11:36 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_rgw.sh) on smithi016 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c696e943c447f7f5785f651098cb7209f68915d7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
pass | 6364194 | 2021-08-28 14:23:05 | 2021-08-28 14:34:19 | 2021-08-28 14:59:06 | 0:24:47 | 0:14:20 | 0:10:27 | smithi | master | centos | 8.3 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8} tasks/insights} | 2 | |
pass | 6364195 | 2021-08-28 14:23:06 | 2021-08-28 14:36:10 | 2021-08-28 15:15:41 | 0:39:31 | 0:27:50 | 0:11:41 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6364197 | 2021-08-28 14:23:07 | 2021-08-28 14:36:20 | 2021-08-28 14:53:25 | 0:17:05 | 0:07:43 | 0:09:22 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6364199 | 2021-08-28 14:23:07 | 2021-08-28 14:36:21 | 2021-08-28 15:00:24 | 0:24:03 | 0:10:15 | 0:13:48 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 6364201 | 2021-08-28 14:23:08 | 2021-08-28 14:36:42 | 2021-08-28 15:03:38 | 0:26:56 | 0:17:57 | 0:08:59 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |