Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi042.front.sepia.ceph.com | smithi | True | True | 2021-01-16 20:15:48.988030 | scheduled_nojha@teuthology | centos | 8 | x86_64 | /home/teuthworker/archive/nojha-2021-01-16_17:36:29-rados-master-distro-basic-smithi/5791959 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
running | 5791959 | | 2021-01-16 17:38:53 | 2021-01-16 20:10:47 | 2021-01-16 20:38:46 | 0:28:52 | | | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/valgrind} | 2 |
pass | 5791941 | | 2021-01-16 17:38:39 | 2021-01-16 19:58:46 | 2021-01-16 20:16:45 | 0:17:59 | 0:09:03 | 0:08:56 | smithi | master | centos | 8.2 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{centos_8}} | 1 |
fail | 5791917 | | 2021-01-16 17:38:20 | 2021-01-16 19:32:18 | 2021-01-16 20:00:17 | 0:27:59 | 0:07:10 | 0:20:49 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_adoption} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_adoption.sh) on smithi042 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9280858e9fc1bfb270256febe37c17d78ff0e138 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
pass | 5791800 | | 2021-01-16 11:16:10 | 2021-01-16 12:58:05 | 2021-01-16 14:06:05 | 1:08:00 | 0:23:21 | 0:44:39 | smithi | master | rhel | 8.3 | powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_fsstress thrashosds-health whitelist_health} | 4 |
fail | 5791791 | | 2021-01-16 11:16:03 | 2021-01-16 12:55:27 | 2021-01-16 13:35:27 | 0:40:00 | 0:33:54 | 0:06:06 | smithi | master | rhel | 8.3 | powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/rhel_8 tasks/rados_api_tests thrashosds-health whitelist_health} | 4 |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi100 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9280858e9fc1bfb270256febe37c17d78ff0e138 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 5791769 | | 2021-01-16 08:00:49 | 2021-01-16 09:11:57 | 2021-01-16 09:37:57 | 0:26:00 | 0:20:13 | 0:05:47 | smithi | master | rhel | 8.2 | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{rhel_latest} tasks/kclient_workunit_suites_pjd} | 3 |
pass | 5791464 | | 2021-01-16 07:03:58 | 2021-01-16 10:20:31 | 2021-01-16 11:12:31 | 0:52:00 | 0:38:34 | 0:13:26 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/default thrashosds-health workloads/snaps-few-objects} | 3 |
pass | 5791398 | | 2021-01-16 07:03:04 | 2021-01-16 09:37:10 | 2021-01-16 09:55:10 | 0:18:00 | 0:08:51 | 0:09:09 | smithi | master | centos | 8.2 | rados/singleton/{all/peer mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 |
pass | 5791379 | | 2021-01-16 07:00:52 | 2021-01-16 08:54:01 | 2021-01-16 10:24:02 | 1:30:01 | 0:18:51 | 1:11:10 | smithi | master | | | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap tasks/rados_cls_all} | 3 |
pass | 5790739 | | 2021-01-16 05:01:12 | 2021-01-16 08:27:56 | 2021-01-16 09:13:56 | 0:46:00 | 0:22:32 | 0:23:28 | smithi | master | centos | 8.2 | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/{0-install test/cfuse_workunit_suites_iozone}} | 3 |
pass | 5790679 | | 2021-01-16 04:01:05 | 2021-01-16 08:17:32 | 2021-01-16 08:41:32 | 0:24:00 | 0:15:47 | 0:08:13 | smithi | master | rhel | 8.3 | fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/suites/fsx}} | 3 |
fail | 5790595 | | 2021-01-16 03:59:56 | 2021-01-16 07:15:24 | 2021-01-16 08:19:25 | 1:04:01 | 0:53:14 | 0:10:47 | smithi | master | centos | 8.2 | fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/blogbench}} | 3 |
Failure Reason: "2021-01-16T07:38:53.109690+0000 mds.d (mds.0) 19 : cluster [WRN] Scrub error on inode 0x100000001fe (/client.0/tmp/blogbench-1.0/src) see mds.d log and `damage ls` output for details" in cluster log
pass | 5790559 | | 2021-01-16 03:59:28 | 2021-01-16 06:47:25 | 2021-01-16 07:17:25 | 0:30:00 | 0:12:30 | 0:17:30 | smithi | master | ubuntu | 18.04 | fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester}} | 3 |
pass | 5790261 | | 2021-01-16 01:13:12 | 2021-01-16 06:17:01 | 2021-01-16 06:55:01 | 0:38:00 | 0:28:14 | 0:09:46 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
pass | 5790226 | | 2021-01-16 01:12:42 | 2021-01-16 05:54:47 | 2021-01-16 06:16:47 | 0:22:00 | 0:13:40 | 0:08:20 | smithi | master | rhel | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 |
pass | 5790154 | | 2021-01-16 01:11:44 | 2021-01-16 05:14:32 | 2021-01-16 05:58:31 | 0:43:59 | 0:32:30 | 0:11:29 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/cache-snaps} | 3 |
pass | 5790119 | | 2021-01-16 01:11:16 | 2021-01-16 04:56:08 | 2021-01-16 05:14:08 | 0:18:00 | 0:11:54 | 0:06:06 | smithi | master | rhel | 8.3 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
pass | 5790055 | | 2021-01-16 01:10:23 | 2021-01-16 04:24:04 | 2021-01-16 04:58:03 | 0:33:59 | 0:23:18 | 0:10:41 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} | 2 |
dead | 5789114 | | 2021-01-15 15:58:48 | 2021-01-15 16:23:11 | 2021-01-16 04:25:36 | 12:02:25 | | | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/pool-snaps-few-objects} | 2 |
pass | 5788963 | | 2021-01-15 10:44:30 | 2021-01-16 14:06:20 | 2021-01-16 15:00:21 | 0:54:01 | 0:44:33 | 0:09:28 | smithi | master | centos | 8.2 | rgw/verify/{centos_latest clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{0-install cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 |