User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
rishabh | 2023-01-23 16:50:37 | 2023-01-23 16:51:42 | 2023-01-24 05:15:32 | 12:23:50 | fs | main | smithi | 510284b | 10 | 11 | 5 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7133725 | 2023-01-23 16:50:51 | 2023-01-23 16:51:37 | 2023-01-23 17:17:37 | 0:26:00 | 0:16:18 | 0:09:42 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/forward-scrub} | 2 |
fail | 7133726 | 2023-01-23 16:50:51 | 2023-01-23 16:51:37 | 2023-01-23 17:30:10 | 0:38:33 | 0:28:28 | 0:10:05 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} | 3 |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi040 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=510284b66513490445619d1430aa869868c71a09 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
pass | 7133727 | 2023-01-23 16:50:52 | 2023-01-23 16:51:38 | 2023-01-23 19:07:31 | 2:15:53 | 2:03:57 | 0:11:56 | smithi | main | centos | 8.stream | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} | 2 |
fail | 7133728 | 2023-01-23 16:50:53 | 2023-01-23 16:51:38 | 2023-01-23 17:25:14 | 0:33:36 | 0:20:09 | 0:13:27 | smithi | main | centos | 8.stream | fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} | 3 |
Failure Reason:
"2023-01-23T17:16:24.888199+0000 mgr.x (mgr.14101) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7133729 | 2023-01-23 16:50:53 | 2023-01-23 16:51:38 | 2023-01-23 18:29:15 | 1:37:37 | 1:23:30 | 0:14:07 | smithi | main | centos | 8.stream | fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} | 1 |
Failure Reason:
Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
pass | 7133730 | 2023-01-23 16:50:54 | 2023-01-23 16:51:38 | 2023-01-23 18:00:20 | 1:08:42 | 0:59:04 | 0:09:38 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/ffsb}} | 3 |
pass | 7133731 | 2023-01-23 16:50:55 | 2023-01-23 16:51:39 | 2023-01-23 17:30:09 | 0:38:30 | 0:27:55 | 0:10:35 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/norstats}} | 3 |
fail | 7133732 | 2023-01-23 16:50:55 | 2023-01-23 16:51:39 | 2023-01-24 00:01:08 | 7:09:29 | 6:58:17 | 0:11:12 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 |
Failure Reason:
Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi110 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=510284b66513490445619d1430aa869868c71a09 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'
fail | 7133733 | 2023-01-23 16:50:56 | 2023-01-23 16:51:39 | 2023-01-23 23:40:26 | 6:48:47 | 6:33:04 | 0:15:43 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/fsstress}} | 3 |
Failure Reason:
Command failed (workunit test suites/fsstress.sh) on smithi002 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=510284b66513490445619d1430aa869868c71a09 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'
fail | 7133734 | 2023-01-23 16:50:57 | 2023-01-23 16:51:40 | 2023-01-23 17:35:45 | 0:44:05 | 0:34:35 | 0:09:30 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsx}} | 3 |
Failure Reason:
error during scrub thrashing: rank damage found: {'backtrace'}
pass | 7133735 | 2023-01-23 16:50:57 | 2023-01-23 16:51:40 | 2023-01-23 17:29:35 | 0:37:55 | 0:26:13 | 0:11:42 | smithi | main | rhel | 8.6 | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/ffsb}} | 2 |
fail | 7133736 | 2023-01-23 16:50:58 | 2023-01-23 16:51:40 | 2023-01-23 17:23:39 | 0:31:59 | 0:19:36 | 0:12:23 | smithi | main | centos | 8.stream | fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} | 3 |
Failure Reason:
"2023-01-23T17:15:02.138142+0000 mgr.x (mgr.14100) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7133737 | 2023-01-23 16:50:59 | 2023-01-23 16:51:41 | 2023-01-23 17:34:14 | 0:42:33 | 0:33:37 | 0:08:56 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/pjd}} | 3 |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi035 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=510284b66513490445619d1430aa869868c71a09 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
dead | 7133738 | 2023-01-23 16:50:59 | 2023-01-23 16:51:41 | 2023-01-24 05:15:32 | 12:23:51 | | | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} | 3 |
Failure Reason:
hit max job timeout
dead | 7133739 | 2023-01-23 16:51:00 | 2023-01-23 16:51:41 | 2023-01-24 05:15:06 | 12:23:25 | | | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/blogbench}} | 3 |
Failure Reason:
hit max job timeout
pass | 7133740 | 2023-01-23 16:51:01 | 2023-01-23 16:51:42 | 2023-01-23 17:46:58 | 0:55:16 | 0:43:53 | 0:11:23 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-recovery} | 2 |
dead | 7133741 | 2023-01-23 16:51:01 | 2023-01-23 16:51:42 | 2023-01-23 17:15:53 | 0:24:11 | | | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/dbench}} | 3 |
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
pass | 7133742 | 2023-01-23 16:51:02 | 2023-01-23 16:51:42 | 2023-01-23 17:46:42 | 0:55:00 | 0:49:43 | 0:05:17 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/ffsb}} | 3 |
fail | 7133743 | 2023-01-23 16:51:03 | 2023-01-23 16:51:43 | 2023-01-23 17:35:12 | 0:43:29 | 0:24:28 | 0:19:01 | smithi | main | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/damage} | 2 |
Failure Reason:
Test failure: test_open_ino_errors (tasks.cephfs.test_damage.TestDamage)
fail | 7133744 | 2023-01-23 16:51:03 | 2023-01-23 16:51:43 | 2023-01-23 17:32:12 | 0:40:29 | 0:27:48 | 0:12:41 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsstress}} | 3 |
Failure Reason:
error during scrub thrashing: rank damage found: {'backtrace'}
pass | 7133745 | 2023-01-23 16:51:04 | 2023-01-23 16:51:43 | 2023-01-23 17:48:43 | 0:57:00 | 0:39:54 | 0:17:06 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/fsx}} | 3 |
pass | 7133746 | 2023-01-23 16:51:05 | 2023-01-23 16:51:44 | 2023-01-23 17:26:15 | 0:34:31 | 0:23:41 | 0:10:50 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/pjd}} | 2 |
dead | 7133747 | 2023-01-23 16:51:05 | 2023-01-23 16:51:44 | 2023-01-23 17:10:40 | 0:18:56 | 0:07:07 | 0:11:49 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} | 3 |
Failure Reason:
{'smithi169.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': True, 'cmd': ['apt-get', 'clean'], 'delta': '0:00:00.019042', 'end': '2023-01-23 17:05:06.866212', 'invocation': {'module_args': {'_raw_params': 'apt-get clean', '_uses_shell': False, 'argv': None, 'chdir': None, 'creates': None, 'executable': None, 'removes': None, 'stdin': None, 'stdin_add_newline': True, 'strip_empty_ends': True, 'warn': True}}, 'msg': 'non-zero return code', 'rc': 100, 'start': '2023-01-23 17:05:06.847170', 'stderr': 'E: Could not get lock /var/cache/apt/archives/lock. It is held by process 1919 (apt-get)\nE: Unable to lock directory /var/cache/apt/archives/', 'stderr_lines': ['E: Could not get lock /var/cache/apt/archives/lock. It is held by process 1919 (apt-get)', 'E: Unable to lock directory /var/cache/apt/archives/'], 'stdout': '', 'stdout_lines': [], 'warnings': ["Consider using the apt module rather than running 'apt-get'. If you need to use command because apt is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message."]}}
dead | 7133748 | 2023-01-23 16:51:06 | 2023-01-23 16:51:44 | 2023-01-23 17:14:40 | 0:22:56 | | | smithi | main | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/fs/snaps}} | 2 |
fail | 7133749 | 2023-01-23 16:51:07 | 2023-01-23 16:51:44 | 2023-01-23 17:30:13 | 0:38:29 | 0:27:53 | 0:10:36 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} | 3 |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi066 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=510284b66513490445619d1430aa869868c71a09 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
pass | 7133750 | 2023-01-23 16:51:08 | 2023-01-23 16:51:45 | 2023-01-23 17:42:06 | 0:50:21 | 0:37:20 | 0:13:01 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/blogbench}} | 3 |