Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi043.front.sepia.ceph.com | smithi | True | True | 2023-01-22 12:01:38.572083 | jenkins-build@teuthology | ubuntu | 20.04 | x86_64 | Locked to capture FOG image for Jenkins build 713 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7133042 | | 2023-01-22 01:25:58 | 2023-01-22 07:13:55 | 2023-01-22 11:43:46 | 4:29:51 | 4:19:01 | 0:10:50 | smithi | main | ubuntu | 20.04 | upgrade:pacific-p2p/pacific-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/pacific 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-comp supported-all-distro/ubuntu_latest thrashosds-health} | 3 |
pass | 7133018 | | 2023-01-21 22:22:22 | 2023-01-22 02:17:45 | 2023-01-22 02:38:51 | 0:21:06 | 0:15:03 | 0:06:03 | smithi | main | rhel | 8.6 | rgw/singleton/{all/radosgw-admin frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/ec supported-random-distro$/{rhel_8}} | 2 |
fail | 7132965 | | 2023-01-21 18:19:57 | 2023-01-22 01:16:53 | 2023-01-22 01:36:42 | 0:19:49 | 0:08:18 | 0:11:31 | smithi | main | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/replicated s3tests-branch tasks/rgw_ragweed ubuntu_latest} | 2 |
Failure Reason: Command failed on smithi043 with status 1: 'cd /home/ubuntu/cephtest/ragweed && ./bootstrap'
pass | 7132918 | | 2023-01-21 18:19:05 | 2023-01-22 00:47:55 | 2023-01-22 01:18:33 | 0:30:38 | 0:22:00 | 0:08:38 | smithi | main | centos | 8.stream | rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/barbican 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability s3tests-branch supported-random-distro$/{centos_8}} | 1 |
pass | 7132865 | | 2023-01-21 18:04:41 | 2023-01-21 20:39:49 | 2023-01-21 21:11:10 | 0:31:21 | 0:22:33 | 0:08:48 | smithi | main | centos | 8.stream | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 2 |
pass | 7132805 | | 2023-01-21 18:03:34 | 2023-01-21 20:12:27 | 2023-01-21 20:39:51 | 0:27:24 | 0:17:16 | 0:10:08 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 |
pass | 7132763 | | 2023-01-21 18:02:47 | 2023-01-21 19:51:41 | 2023-01-21 20:12:56 | 0:21:15 | 0:14:55 | 0:06:20 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_5925} | 2 |
pass | 7132734 | | 2023-01-21 18:02:15 | 2023-01-21 19:33:30 | 2023-01-21 19:52:18 | 0:18:48 | 0:08:16 | 0:10:32 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 |
pass | 7132709 | | 2023-01-21 18:01:47 | 2023-01-21 19:19:31 | 2023-01-21 19:34:35 | 0:15:04 | 0:06:00 | 0:09:04 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 |
fail | 7132683 | | 2023-01-21 18:01:18 | 2023-01-21 19:03:31 | 2023-01-21 19:19:09 | 0:15:38 | 0:05:26 | 0:10:12 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} | 1 |
Failure Reason: Command failed on smithi043 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
pass | 7132654 | | 2023-01-21 18:00:45 | 2023-01-21 18:42:10 | 2023-01-21 19:03:15 | 0:21:05 | 0:15:26 | 0:05:39 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/dedup-io-mixed} | 2 |
fail | 7132570 | | 2023-01-21 16:33:30 | 2023-01-22 04:02:35 | 2023-01-22 04:44:35 | 0:42:00 | 0:32:12 | 0:09:48 | smithi | main | centos | 8.stream | rgw/verify/{0-install centos_latest clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/replicated s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} tasks/{cls ragweed reshard s3tests-java s3tests} validater/valgrind} | 2 |
Failure Reason: Command failed (ragweed tests against rgw) on smithi043 with status 1: "RAGWEED_CONF=/home/ubuntu/cephtest/archive/ragweed.client.0.conf RAGWEED_STAGES=prepare,check BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/ragweed/virtualenv/bin/python -m nose -w /home/ubuntu/cephtest/ragweed -v -a '!fails_on_rgw'"
pass | 7132525 | | 2023-01-21 16:32:39 | 2023-01-22 03:36:29 | 2023-01-22 04:03:16 | 0:26:47 | 0:15:53 | 0:10:54 | smithi | main | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap s3tests-branch thrasher/default thrashosds-health ubuntu_latest workload/rgw_bucket_quota} | 2 |
fail | 7132472 | | 2023-01-21 16:14:05 | 2023-01-22 00:10:32 | 2023-01-22 00:47:51 | 0:37:19 | 0:26:00 | 0:11:19 | smithi | main | centos | 8.stream | fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 |
Failure Reason: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi043 with status 128: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bcbf88bee4969f40f7fc319ee08e4d88e17faf44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'
pass | 7132421 | | 2023-01-21 16:13:09 | 2023-01-21 23:29:41 | 2023-01-22 00:10:52 | 0:41:11 | 0:29:55 | 0:11:16 | smithi | main | centos | 8.stream | fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} | 2 |
pass | 7132393 | | 2023-01-21 16:12:37 | 2023-01-21 23:10:20 | 2023-01-21 23:31:49 | 0:21:29 | 0:14:09 | 0:07:20 | smithi | main | rhel | 8.4 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{no}} | 3 |
pass | 7132362 | | 2023-01-21 16:12:02 | 2023-01-21 22:43:56 | 2023-01-21 23:10:24 | 0:26:28 | 0:15:08 | 0:11:20 | smithi | main | ubuntu | 20.04 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds-flush} | 2 |
pass | 7132321 | | 2023-01-21 14:24:08 | 2023-01-22 04:44:02 | 2023-01-22 07:11:42 | 2:27:40 | 2:14:27 | 0:13:13 | smithi | main | ubuntu | 18.04 | upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-all 4-pacific 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} mon_election/connectivity objectstore/bluestore-bitmap ubuntu_18.04} | 4 |
pass | 7132218 | | 2023-01-21 05:39:09 | 2023-01-22 02:59:47 | 2023-01-22 03:37:08 | 0:37:21 | 0:27:05 | 0:10:16 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 |
fail | 7132184 | | 2023-01-21 05:38:36 | 2023-01-22 02:38:34 | 2023-01-22 02:59:58 | 0:21:24 | 0:14:44 | 0:06:40 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 |
Failure Reason: Command failed on smithi086 with status 127: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:7f9335998fc8f59d47d102b1022aa6e9451c9e41 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 63824de8-9a00-11ed-9e55-001a4aab830c -- ceph mon dump -f json'