Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6539338 2021-12-02 11:11:43 2021-12-02 11:12:01 2021-12-02 11:45:43 0:33:42 0:23:13 0:10:29 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi142 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 6539339 2021-12-02 11:11:43 2021-12-02 11:12:31 2021-12-02 11:31:57 0:19:26 0:13:00 0:06:26 smithi master rhel 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed on smithi087 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6539340 2021-12-02 11:11:44 2021-12-02 11:12:42 2021-12-02 11:34:04 0:21:22 0:10:31 0:10:51 smithi master centos 8.3 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 6539341 2021-12-02 11:11:45 2021-12-02 11:12:52 2021-12-02 11:31:47 0:18:55 0:12:52 0:06:03 smithi master rhel 8.3 rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi135 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6539342 2021-12-02 11:11:46 2021-12-02 11:12:53 2021-12-02 11:34:46 0:21:53 0:11:19 0:10:34 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi035 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:a65ef0faa6fa0b8e57f90d5468d21c22c2d86e31-crimson -v bootstrap --fsid 7626bea0-5363-11ec-8c2e-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.35 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

dead 6539343 2021-12-02 11:11:47 2021-12-02 11:13:13 2021-12-02 23:23:12 12:09:59 smithi master centos 8.3 rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

hit max job timeout

fail 6539344 2021-12-02 11:11:47 2021-12-02 11:15:04 2021-12-02 14:11:08 2:56:04 2:45:55 0:10:09 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} 2
Failure Reason:

Command failed (workunit test rados/stress_watch.sh) on smithi074 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'

pass 6539345 2021-12-02 11:11:48 2021-12-02 11:15:45 2021-12-02 11:37:43 0:21:58 0:10:31 0:11:27 smithi master centos 8.3 rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 6539346 2021-12-02 11:11:49 2021-12-02 11:16:05 2021-12-02 11:40:10 0:24:05 0:15:11 0:08:54 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi114 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6539347 2021-12-02 11:11:50 2021-12-02 11:18:56 2021-12-02 11:40:28 0:21:32 0:11:15 0:10:17 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:a65ef0faa6fa0b8e57f90d5468d21c22c2d86e31-crimson -v bootstrap --fsid 3fc39472-5364-11ec-8c2e-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.38 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

dead 6539348 2021-12-02 11:11:51 2021-12-02 11:19:17 2021-12-02 23:28:31 12:09:14 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

hit max job timeout

fail 6539349 2021-12-02 11:11:52 2021-12-02 11:19:37 2021-12-02 11:50:38 0:31:01 0:21:21 0:09:40 smithi master centos 8.3 rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

"2021-12-02T11:40:02.159722+0000 mon.a (mon.0) 126 : cluster [WRN] Health check failed: 2 slow ops, oldest one blocked for 33 sec, mon.a has slow ops (SLOW_OPS)" in cluster log

fail 6539350 2021-12-02 11:11:52 2021-12-02 11:19:38 2021-12-02 11:32:38 0:13:00 0:07:10 0:05:50 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} 2
Failure Reason:

Command failed on smithi033 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'

fail 6539351 2021-12-02 11:11:53 2021-12-02 11:19:38 2021-12-02 11:46:46 0:27:08 0:15:02 0:12:06 smithi master centos 8.3 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 2
Failure Reason:

Command failed (workunit test mon/pg_autoscaler.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh'

fail 6539352 2021-12-02 11:11:54 2021-12-02 11:21:29 2021-12-02 11:53:24 0:31:55 0:21:55 0:10:00 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi093 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 6539353 2021-12-02 11:11:55 2021-12-02 11:22:10 2021-12-02 11:43:36 0:21:26 0:11:41 0:09:45 smithi master centos 8.2 rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi099 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:a65ef0faa6fa0b8e57f90d5468d21c22c2d86e31-crimson -v bootstrap --fsid c21dcffa-5364-11ec-8c2e-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.99 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6539354 2021-12-02 11:11:56 2021-12-02 11:22:21 2021-12-02 11:43:53 0:21:32 0:11:47 0:09:45 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/repair_test} 2
fail 6539355 2021-12-02 11:11:56 2021-12-02 11:22:41 2021-12-02 11:50:31 0:27:50 0:16:56 0:10:54 smithi master centos 8.3 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi118 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 6539356 2021-12-02 11:11:57 2021-12-02 11:22:52 2021-12-02 11:43:49 0:20:57 0:14:58 0:05:59 smithi master rhel 8.3 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi042 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6539357 2021-12-02 11:11:58 2021-12-02 11:22:52 2021-12-02 11:42:45 0:19:53 0:13:35 0:06:18 smithi master rhel 8.3 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi192 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6539358 2021-12-02 11:11:59 2021-12-02 11:23:03 2021-12-02 23:33:38 12:10:35 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

hit max job timeout

fail 6539359 2021-12-02 11:12:00 2021-12-02 11:23:44 2021-12-02 11:45:20 0:21:36 0:11:17 0:10:19 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:a65ef0faa6fa0b8e57f90d5468d21c22c2d86e31-crimson -v bootstrap --fsid f11b7ee2-5364-11ec-8c2e-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.94 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6539360 2021-12-02 11:12:01 2021-12-02 11:37:25 423 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi026 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'

fail 6539361 2021-12-02 11:12:01 2021-12-02 11:24:25 2021-12-02 11:45:47 0:21:22 0:13:48 0:07:34 smithi master rhel 8.3 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi124 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
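The failures above fall into a handful of recurring buckets: `yum -y install ceph-mgr-cephadm` package failures, cephadm bootstrap failures, jobs dead on "hit max job timeout", `status 124` command timeouts, workunit script failures, and cluster-log warnings. A minimal sketch of how one might tally them when triaging a run like this — the helper and bucket names are illustrative, not part of teuthology:

```python
from collections import Counter

def categorize(reason: str) -> str:
    """Map a raw teuthology failure-reason string to a coarse triage bucket.

    The buckets below are hypothetical labels chosen to match the failure
    modes seen in this run; adjust the substring checks for other runs.
    """
    if "hit max job timeout" in reason:
        return "job timeout (dead)"
    if "yum -y install" in reason:
        return "package install failure"
    if "cephadm" in reason and "bootstrap" in reason:
        return "cephadm bootstrap failure"
    if "status 124" in reason:
        return "command timeout"
    if "workunit test" in reason:
        return "workunit failure"
    if "[WRN]" in reason:
        return "cluster log warning"
    return "other"

# A few of the failure reasons from the jobs above, abbreviated:
sample_reasons = [
    "Command failed on smithi087 with status 1: 'sudo yum -y install ceph-mgr-cephadm'",
    "hit max job timeout",
    "Command failed on smithi142 with status 124: 'sudo adjust-ulimits ... osd dump'",
    "Command failed (workunit test rados/stress_watch.sh) on smithi074 with status 134",
]

counts = Counter(categorize(r) for r in sample_reasons)
for bucket, n in counts.most_common():
    print(f"{n:3d}  {bucket}")
```

Run against all 21 reasons in this page, such a tally would surface the `ceph-mgr-cephadm` install failure as the dominant mode, which usually points at a repo/build problem for that branch rather than 21 independent test bugs.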