Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7293868 2023-06-01 17:49:08 2023-06-01 17:49:54 2023-06-01 18:28:54 0:39:00 0:25:36 0:13:24 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 7293869 2023-06-01 17:49:09 2023-06-01 17:49:54 2023-06-01 18:16:47 0:26:53 0:16:33 0:10:20 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{rhel_8} tasks/crash} 2
dead 7293870 2023-06-01 17:49:10 2023-06-01 17:49:54 2023-06-02 06:01:24 12:11:30 smithi main rhel 8.6 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

hit max job timeout

pass 7293871 2023-06-01 17:49:10 2023-06-01 17:49:55 2023-06-01 18:26:28 0:36:33 0:22:46 0:13:47 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/small-objects-balanced} 2
fail 7293872 2023-06-01 17:49:11 2023-06-01 17:49:55 2023-06-01 18:22:59 0:33:04 0:20:59 0:12:05 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi121 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=85bd3c8bfbc2d8a6893339ebab01cb60f8d7a546 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7293873 2023-06-01 17:49:12 2023-06-01 17:49:56 2023-06-01 18:19:36 0:29:40 0:17:16 0:12:24 smithi main ubuntu 20.04 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_20.04}} 2
pass 7293874 2023-06-01 17:49:13 2023-06-01 17:49:56 2023-06-01 18:13:57 0:24:01 0:15:37 0:08:24 smithi main rhel 8.6 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 7293875 2023-06-01 17:49:13 2023-06-01 17:49:56 2023-06-01 18:31:33 0:41:37 0:31:34 0:10:03 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_20.04} tasks/rados_workunit_loadgen_big} 2
pass 7293876 2023-06-01 17:49:14 2023-06-01 17:49:57 2023-06-01 18:24:31 0:34:34 0:23:52 0:10:42 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
pass 7293877 2023-06-01 17:49:15 2023-06-01 17:49:57 2023-06-01 18:19:25 0:29:28 0:20:44 0:08:44 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 7293878 2023-06-01 17:49:16 2023-06-01 17:49:57 2023-06-01 18:33:17 0:43:20 0:24:59 0:18:21 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi124 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents:TestClsRbd.mirror'"

pass 7293879 2023-06-01 17:49:16 2023-06-01 17:49:58 2023-06-01 18:25:30 0:35:32 0:21:19 0:14:13 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
dead 7293880 2023-06-01 17:49:17 2023-06-01 17:49:58 2023-06-02 06:02:08 12:12:10 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_20.04}} 1
Failure Reason:

hit max job timeout

fail 7293881 2023-06-01 17:49:18 2023-06-01 17:49:58 2023-06-01 18:15:04 0:25:06 0:10:34 0:14:32 smithi main ubuntu 20.04 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_20.04}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi023 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=85bd3c8bfbc2d8a6893339ebab01cb60f8d7a546 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 7293882 2023-06-01 17:49:19 2023-06-01 17:49:59 2023-06-01 18:16:54 0:26:55 0:13:10 0:13:45 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7293883 2023-06-01 17:49:19 2023-06-01 17:49:59 2023-06-01 18:10:40 0:20:41 0:12:28 0:08:13 smithi main centos 8.stream rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi060 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=85bd3c8bfbc2d8a6893339ebab01cb60f8d7a546 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7293884 2023-06-01 17:49:20 2023-06-01 17:49:59 2023-06-01 18:19:39 0:29:40 0:20:48 0:08:52 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi089.front.sepia.ceph.com: ['type=AVC msg=audit(1685643429.546:20166): avc: denied { ioctl } for pid=110378 comm="iptables" path="/var/lib/containers/storage/overlay/52b35f311d71e688a3bb25a68810d67968a040d097ce323e29472a71b7894513/merged" dev="overlay" ino=3543549 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1685643429.497:20165): avc: denied { ioctl } for pid=110374 comm="iptables" path="/var/lib/containers/storage/overlay/52b35f311d71e688a3bb25a68810d67968a040d097ce323e29472a71b7894513/merged" dev="overlay" ino=3543549 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 7293885 2023-06-01 17:49:21 2023-06-01 17:50:00 2023-06-01 18:27:15 0:37:15 0:23:54 0:13:21 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=85bd3c8bfbc2d8a6893339ebab01cb60f8d7a546 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7293886 2023-06-01 17:49:22 2023-06-01 17:50:00 2023-06-01 18:15:00 0:25:00 0:12:20 0:12:40 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-hybrid} supported-random-distro$/{ubuntu_20.04} tasks/crash} 2