User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
ifed01 | 2021-12-27 13:37:54 | 2021-12-27 19:21:51 | 2021-12-28 07:31:20 | 12:09:29 | rados | wip-ifed-daemonhistogram | smithi | 2f54fdc | 8 | 3 | 1 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6586600 | | 2021-12-27 13:39:02 | 2021-12-27 19:21:50 | 2021-12-27 20:03:34 | 0:41:44 | 0:32:17 | 0:09:27 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 |
fail | 6586601 | | 2021-12-27 13:39:03 | 2021-12-27 19:21:50 | 2021-12-27 19:47:32 | 0:25:42 | 0:18:03 | 0:07:39 | smithi | master | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi120 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2f54fdc3a011357d84b2af56c8177a08c3f3bd93 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
dead | 6586602 | | 2021-12-27 13:39:04 | 2021-12-27 19:21:51 | 2021-12-28 07:31:20 | 12:09:29 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason:
hit max job timeout
pass | 6586603 | | 2021-12-27 13:39:05 | 2021-12-27 19:22:01 | 2021-12-27 20:00:35 | 0:38:34 | 0:28:08 | 0:10:26 | smithi | master | | | rados/cephadm/workunits/{agent/on mon_election/classic task/test_nfs} | 1 |
pass | 6586604 | | 2021-12-27 13:39:06 | 2021-12-27 19:22:21 | 2021-12-27 20:03:51 | 0:41:30 | 0:32:28 | 0:09:02 | smithi | master | centos | 8.3 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 |
pass | 6586605 | | 2021-12-27 13:39:07 | 2021-12-27 19:22:22 | 2021-12-27 19:59:50 | 0:37:28 | 0:29:57 | 0:07:31 | smithi | master | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream} thrashers/many workloads/rados_mon_workunits} | 2 |
pass | 6586606 | | 2021-12-27 13:39:08 | 2021-12-27 19:22:22 | 2021-12-27 20:02:55 | 0:40:33 | 0:29:24 | 0:11:09 | smithi | master | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 |
fail | 6586607 | | 2021-12-27 13:39:09 | 2021-12-27 19:22:42 | 2021-12-27 19:46:45 | 0:24:03 | 0:16:21 | 0:07:42 | smithi | master | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi035 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2f54fdc3a011357d84b2af56c8177a08c3f3bd93 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6586608 | | 2021-12-27 13:39:10 | 2021-12-27 19:23:02 | 2021-12-27 19:51:15 | 0:28:13 | 0:16:46 | 0:11:27 | smithi | master | | | rados/cephadm/workunits/{agent/on mon_election/connectivity task/test_nfs} | 1 |
Failure Reason:
timeout expired in wait_until_healthy
pass | 6586609 | | 2021-12-27 13:39:11 | 2021-12-27 19:23:13 | 2021-12-27 20:02:18 | 0:39:05 | 0:32:53 | 0:06:12 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
pass | 6586610 | | 2021-12-27 13:39:12 | 2021-12-27 19:23:33 | 2021-12-27 20:14:27 | 0:50:54 | 0:43:04 | 0:07:50 | smithi | master | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-radosbench} | 2 |
pass | 6586611 | | 2021-12-27 13:39:14 | 2021-12-27 19:24:14 | 2021-12-27 20:04:12 | 0:39:58 | 0:28:49 | 0:11:09 | smithi | master | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 |