Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi071.front.sepia.ceph.com | smithi | True | True | 2023-01-12 12:07:23.150305 | guits@gabrioux-mac | rhel | 8.6 | x86_64 | None |
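The `Locked Since` timestamp uses teuthology's lock-database format (space-separated date and time with microseconds). A minimal sketch of parsing it to compute lock age with Python's standard library; the fixed `as_of` instant is a hypothetical example chosen so the result is stable:

```python
from datetime import datetime

# Parse the "Locked Since" value from the lock table above.
locked_since = datetime.strptime("2023-01-12 12:07:23.150305",
                                 "%Y-%m-%d %H:%M:%S.%f")

# Age of the lock relative to a later instant (a fixed example time,
# rather than datetime.now(), so the output is reproducible).
as_of = datetime(2023, 1, 13, 12, 7, 24)
lock_age = as_of - locked_since
print(lock_age)  # → 1 day, 0:00:00.849695
```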
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7134669 | | 2023-01-11 21:37:25 | 2023-01-12 10:33:13 | 2023-01-12 11:42:03 | 1:08:50 | 1:03:05 | 0:05:45 | smithi | main | rhel | 8.6 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 |
pass | 7134641 | | 2023-01-11 21:36:51 | 2023-01-12 10:11:42 | 2023-01-12 10:32:50 | 0:21:08 | 0:13:42 | 0:07:26 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 |
pass | 7134587 | | 2023-01-11 21:36:14 | 2023-01-12 09:31:04 | 2023-01-12 10:11:08 | 0:40:04 | 0:28:38 | 0:11:26 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/snaps-few-objects} | 2 |
pass | 7134566 | | 2023-01-11 21:36:02 | 2023-01-12 09:11:45 | 2023-01-12 09:32:45 | 0:21:00 | 0:15:12 | 0:05:48 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 |
pass | 7134539 | | 2023-01-11 21:35:46 | 2023-01-12 08:37:58 | 2023-01-12 09:11:43 | 0:33:45 | 0:26:29 | 0:07:16 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 |
pass | 7134504 | | 2023-01-11 21:35:31 | 2023-01-12 08:15:22 | 2023-01-12 08:38:26 | 0:23:04 | 0:14:24 | 0:08:40 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
pass | 7134476 | | 2023-01-11 21:35:20 | 2023-01-12 07:52:40 | 2023-01-12 08:17:06 | 0:24:26 | 0:16:10 | 0:08:16 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
pass | 7134445 | | 2023-01-11 21:35:07 | 2023-01-12 07:32:29 | 2023-01-12 07:52:59 | 0:20:30 | 0:14:19 | 0:06:11 | smithi | main | rhel | 8.6 | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 |
pass | 7134388 | | 2023-01-11 21:34:45 | 2023-01-12 07:32:21 | | 1603 | | | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 |
fail | 7133652 | | 2023-01-11 19:59:50 | 2023-01-12 11:41:41 | 2023-01-12 12:07:15 | 0:25:34 | 0:19:03 | 0:06:31 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 |

Failure Reason: timeout expired in wait_until_healthy
fail | 7133464 | | 2023-01-11 15:57:57 | 2023-01-11 17:58:18 | 2023-01-11 18:49:07 | 0:50:49 | 0:39:48 | 0:11:01 | smithi | main | centos | 8.stream | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/ec-profile s3tests-branch sharding$/{single} striping$/{stripe-equals-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 |

Failure Reason: Command failed on smithi071 with status 1: 'cd /home/ubuntu/cephtest/s3-tests-java && /opt/gradle/gradle/bin/gradle clean test --rerun-tasks --no-build-cache --tests ObjectTest'
dead | 7133375 | | 2023-01-11 12:42:01 | 2023-01-11 18:48:18 | 2023-01-12 07:00:50 | 12:12:32 | | | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |

Failure Reason: hit max job timeout
fail | 7133314 | | 2023-01-11 12:40:05 | 2023-01-11 16:18:17 | 2023-01-11 18:01:53 | 1:43:36 | 1:31:16 | 0:12:20 | smithi | main | centos | 8.stream | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-zstd 4-supported-random-distro$/{centos_8} 5-pool/replicated-data-pool 6-prepare/raw-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 |

Failure Reason: "2023-01-11T16:38:59.041811+0000 mgr.x (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7133300 | | 2023-01-11 12:39:49 | 2023-01-11 15:59:00 | 2023-01-11 16:22:24 | 0:23:24 | 0:14:49 | 0:08:35 | smithi | main | rhel | 8.4 | rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-hybrid supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/rbd_nbd} | 3 |
pass | 7133280 | | 2023-01-11 12:39:28 | 2023-01-11 15:27:02 | 2023-01-11 16:00:16 | 0:33:14 | 0:26:03 | 0:07:11 | smithi | main | centos | 8.stream | rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-comp-zstd pool/none supported-random-distro$/{centos_8} workloads/rbd_cli_generic} | 1 |
pass | 7133200 | | 2023-01-11 12:38:00 | 2023-01-11 12:38:16 | 2023-01-11 15:26:52 | 2:48:36 | 2:32:57 | 0:15:39 | smithi | main | ubuntu | 20.04 | rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-comp-zlib qemu/xfstests supported-random-distro$/{ubuntu_latest} workloads/rebuild_object_map} | 3 |
fail | 7133144 | | 2023-01-11 11:45:07 | 2023-01-11 11:45:07 | 2023-01-11 12:05:34 | 0:20:27 | 0:09:02 | 0:11:25 | smithi | main | centos | 8.stream | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/{0-install test/rados_ec_snaps}} | 3 |

Failure Reason: Command failed on smithi077 with status 1: 'sudo yum -y install cephfs-java'
pass | 7133109 | | 2023-01-10 21:27:29 | 2023-01-10 21:27:56 | 2023-01-10 22:06:31 | 0:38:35 | 0:24:24 | 0:14:11 | smithi | main | ubuntu | 20.04 | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/rados_workunit_loadgen_mix}} | 3 |
fail | 7133045 | | 2023-01-10 15:14:42 | 2023-01-10 15:21:25 | 2023-01-10 16:13:42 | 0:52:17 | 0:43:09 | 0:09:08 | smithi | main | centos | 8.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} centos_latest clusters/{fixed-2} crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} | 2 |

Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 0 --op snap_remove 0 --op rollback 0 --op setattr 25 --op rmattr 25 --op copy_from 0 --op write_excl 50 --pool unique_pool_0'
pass | 7132995 | | 2023-01-10 14:07:23 | 2023-01-10 14:48:59 | 2023-01-10 15:22:35 | 0:33:36 | 0:20:24 | 0:13:12 | smithi | main | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/iozone}} | 2 |
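The duration columns above are internally consistent: for each completed job, Runtime (Updated minus Started) equals Duration (time the job actually ran) plus In Waiting (time spent waiting on machines). A short Python sketch checking this for job 7134669; `parse_hms` is a hypothetical helper, not part of teuthology:

```python
from datetime import timedelta

def parse_hms(s: str) -> timedelta:
    """Parse a pulpito-style H:MM:SS duration into a timedelta."""
    h, m, sec = (int(part) for part in s.split(":"))
    return timedelta(hours=h, minutes=m, seconds=sec)

# Sample values from job 7134669 above.
runtime = parse_hms("1:08:50")     # Runtime column (Updated - Started)
duration = parse_hms("1:03:05")    # Duration column
in_waiting = parse_hms("0:05:45")  # In Waiting column

# Runtime decomposes into running time plus waiting time.
assert runtime == duration + in_waiting
```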