User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2021-01-27 15:44:33 | 2021-01-27 16:47:58 | 2021-01-28 05:34:54 | 12:46:56 | rados | wip-yuri7-testing-2021-01-26-0840-pacific | smithi | 6ae6c34 | 6 | 18 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 5833675 | 2021-01-27 15:45:49 | 2021-01-27 16:47:58 | 2021-01-27 17:01:57 | 0:13:59 | 0:04:16 | 0:09:43 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} | 2 | |
Failure Reason:
Command failed on smithi196 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ae6c340188bb4cda209cbc795db104d877b4516 -v bootstrap --fsid 26316746-60c1-11eb-8f9a-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.196 && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 5833676 | 2021-01-27 15:45:50 | 2021-01-27 16:48:01 | 2021-01-27 17:16:00 | 0:27:59 | 0:17:13 | 0:10:46 | smithi | master | centos | 8.0 | rados/cephadm/upgrade/{1-start 2-repo_digest/defaut 3-start-upgrade 4-wait distro$/{centos_8.0} fixed-2 mon_election/connectivity} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
pass | 5833677 | 2021-01-27 15:45:51 | 2021-01-27 16:49:25 | 2021-01-27 17:19:24 | 0:29:59 | 0:18:53 | 0:11:06 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{distro/centos_latest mon_election/classic task/test_orch_cli} | 1 | |
fail | 5833678 | 2021-01-27 15:45:52 | 2021-01-27 16:51:27 | 2021-01-27 17:07:27 | 0:16:00 | 0:04:50 | 0:11:10 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi204 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ae6c340188bb4cda209cbc795db104d877b4516 -v bootstrap --fsid b5e077b0-60c1-11eb-8f9a-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.204 && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 5833679 | 2021-01-27 15:45:53 | 2021-01-27 16:51:51 | 2021-01-27 18:07:51 | 1:16:00 | 1:01:16 | 0:14:44 | smithi | master | centos | 8.2 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
fail | 5833680 | 2021-01-27 15:45:53 | 2021-01-27 16:57:23 | 2021-01-27 18:35:25 | 1:38:02 | 1:24:43 | 0:13:19 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi044 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6ae6c340188bb4cda209cbc795db104d877b4516 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 5833681 | 2021-01-27 15:45:54 | 2021-01-27 16:57:45 | 2021-01-27 17:19:44 | 0:21:59 | 0:07:17 | 0:14:42 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi065 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6ae6c340188bb4cda209cbc795db104d877b4516 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 5833682 | 2021-01-27 15:45:55 | 2021-01-27 17:02:07 | 2021-01-27 17:52:08 | 0:50:01 | 0:36:20 | 0:13:41 | smithi | master | centos | 8.2 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 400000 --objects 20 --max-in-flight 8 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 0 --op write 0 --op delete 20 --op append 5 --op write_excl 0 --op append_excl 5 --pool unique_pool_0'
fail | 5833683 | 2021-01-27 15:45:56 | 2021-01-27 17:03:48 | 2021-01-27 17:35:47 | 0:31:59 | 0:17:49 | 0:14:10 | smithi | master | centos | 8.0 | rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{centos_8.0} fixed-2 mon_election/connectivity} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
fail | 5833684 | 2021-01-27 15:45:57 | 2021-01-27 17:07:53 | 2021-01-27 17:23:52 | 0:15:59 | 0:05:02 | 0:10:57 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi151 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ae6c340188bb4cda209cbc795db104d877b4516 -v bootstrap --fsid 212ea710-60c4-11eb-8f9a-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.151 && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 5833685 | 2021-01-27 15:45:57 | 2021-01-27 17:07:53 | 2021-01-27 17:25:52 | 0:17:59 | 0:07:03 | 0:10:56 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi191 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ae6c340188bb4cda209cbc795db104d877b4516 -v bootstrap --fsid 53c0e170-60c4-11eb-8f9a-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-addrv '[v1:172.21.15.191:6789]' && sudo chmod +r /etc/ceph/ceph.client.admin.keyring"
fail | 5833686 | 2021-01-27 15:45:58 | 2021-01-27 17:09:54 | 2021-01-27 17:35:52 | 0:25:58 | 0:17:05 | 0:08:53 | smithi | master | centos | 8.2 | rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade 4-wait distro$/{centos_latest} fixed-2 mon_election/classic} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds
fail | 5833687 | 2021-01-27 15:45:59 | 2021-01-27 17:09:54 | 2021-01-27 17:29:52 | 0:19:58 | 0:05:25 | 0:14:33 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi133 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ae6c340188bb4cda209cbc795db104d877b4516 -v bootstrap --fsid f8a9f2e4-60c4-11eb-8f9a-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.133 && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 5833688 | 2021-01-27 15:46:00 | 2021-01-27 17:09:54 | 2021-01-27 17:25:52 | 0:15:58 | 0:04:16 | 0:11:42 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start} | 2 | |
Failure Reason:
Command failed on smithi035 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ae6c340188bb4cda209cbc795db104d877b4516 -v bootstrap --fsid 7ab7aae8-60c4-11eb-8f9a-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 172.21.15.35 && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 5833689 | 2021-01-27 15:46:00 | 2021-01-27 17:14:14 | 2021-01-27 17:36:12 | 0:21:58 | 0:04:55 | 0:17:03 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi065 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ae6c340188bb4cda209cbc795db104d877b4516 -v bootstrap --fsid c1aa553a-60c5-11eb-8f9a-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.65 && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 5833690 | 2021-01-27 15:46:01 | 2021-01-27 17:16:17 | 2021-01-27 17:32:15 | 0:15:58 | 0:07:21 | 0:08:37 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6ae6c340188bb4cda209cbc795db104d877b4516 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 5833691 | 2021-01-27 15:46:02 | 2021-01-27 17:19:50 | 2021-01-27 17:49:48 | 0:29:58 | 0:18:03 | 0:11:55 | smithi | master | centos | 8.2 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 5833692 | 2021-01-27 15:46:03 | 2021-01-27 17:19:50 | 2021-01-27 18:23:49 | 1:03:59 | 0:20:27 | 0:43:32 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
fail | 5833693 | 2021-01-27 15:46:04 | 2021-01-27 17:24:18 | 2021-01-27 17:42:17 | 0:17:59 | 0:07:27 | 0:10:32 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi035 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ae6c340188bb4cda209cbc795db104d877b4516 -v bootstrap --fsid c9a4dfb6-60c6-11eb-8f9b-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.35 && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
pass | 5833694 | 2021-01-27 15:46:04 | 2021-01-27 17:26:17 | 2021-01-27 18:48:17 | 1:22:00 | 1:02:10 | 0:19:50 | smithi | master | centos | 8.2 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 5833695 | 2021-01-27 15:46:05 | 2021-01-27 17:26:17 | 2021-01-27 18:10:16 | 0:43:59 | 0:23:59 | 0:20:00 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 5833696 | 2021-01-27 15:46:06 | 2021-01-27 17:27:46 | 2021-01-27 17:43:45 | 0:15:59 | 0:05:19 | 0:10:40 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi133 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ae6c340188bb4cda209cbc795db104d877b4516 -v bootstrap --fsid dd170cae-60c6-11eb-8f9b-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.133 && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
fail | 5833697 | 2021-01-27 15:46:07 | 2021-01-27 17:30:18 | 2021-01-27 17:56:17 | 0:25:59 | 0:05:08 | 0:20:51 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi133 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ae6c340188bb4cda209cbc795db104d877b4516 -v bootstrap --fsid c367f604-60c8-11eb-8f9b-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.133 && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
dead | 5833698 | 2021-01-27 15:46:08 | 2021-01-27 17:32:19 | 2021-01-28 05:34:54 | 12:02:35 | | | smithi | master | rhel | 8.3 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
fail | 5833699 | 2021-01-27 15:46:08 | 2021-01-27 17:36:14 | 2021-01-27 17:54:12 | 0:17:58 | 0:05:59 | 0:11:59 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman mon_election/classic task/test_adoption} | 1 | |
Failure Reason:
Found coredumps on ubuntu@smithi172.front.sepia.ceph.com