burrito's Issues

neutron-ovs-agent pod readiness probe failed

I am installing and testing a burrito testbed environment.

When an OpenStack instance (VM) is shut down, the neutron-ovs-agent pod status becomes 0/1 (NotReady).

1.) Shut down the instance

root@btx-0:/# openstack server list --all
+--------------------------------------+------+---------+---------------------------------------+--------+---------+
| ID                                   | Name | Status  | Networks                              | Image  | Flavor  |
+--------------------------------------+------+---------+---------------------------------------+--------+---------+
| 585c39c9-c935-442e-880f-10f193fdcd56 | test | SHUTOFF | private-net=10.10.40.85, 172.30.1.140 | cirros | m1.tiny |
+--------------------------------------+------+---------+---------------------------------------+--------+---------+

2.) Check the neutron-ovs-agent pod status

[clex@control01 ~]$ kubectl get po -n openstack -owide|grep ovs-agent
...
neutron-ovs-agent-default-hcwvm                0/1     Running            0                 26h     192.168.20.102   compute02   <none>           <none>
...
Events:
  Type     Reason     Age                   From     Message
  ----     ------     ----                  ----     -------
  Warning  Unhealthy  105s (x513 over 26h)  kubelet  Readiness probe failed:

It seems to be caused by neutron-ovs-agent not removing the virtual tap interface when the VM is shut down, which leaves a stale port that ovs-vsctl reports as an error.

neutron@compute02:/$ cat /tmp/neutron-openvswitch-agent-readiness.sh
#!/bin/bash
...
[ -z "$(/usr/bin/ovs-vsctl show | grep error:)" ]

neutron@compute02:/$ /usr/bin/ovs-vsctl show | grep error:
                error: "could not open network device tap9b640a49-79 (No such device)"

Is it "openstack(yoga)'s problem?" or "kubernetes readiness.sh script's problem?"

/etc/hosts issue when adding new nodes

When you add a new compute node, /etc/hosts is not populated with an entry for cinder.

new compute node

$ cat /etc/hosts
...
192.168.xx.xx control01.cluster.local control01

In this case, VM creation fails because connections to cinder are refused.

$ curl -v http://cinder.openstack.svc.cluster.local:8080
...

* Failed to connect to cinder.openstack.svc.cluster.local port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to cinder.openstack.svc.cluster.local port 8080: Connection refused

If you add the following entry to /etc/hosts and restart the pod, it connects to cinder and operates normally.

$ sudo vi /etc/hosts
...
192.168.20.11 control01.cluster.local control01 cinder.openstack.svc.cluster.local

$ curl -v http://cinder.openstack.svc.cluster.local:8080
...

* Connection #0 to host cinder.openstack.svc.cluster.local left intact
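
As a stopgap until the playbook populates this automatically, the entry can be appended idempotently on each new compute node (a sketch; the control-node IP is the one from this environment):

# Hypothetical stopgap: add the cinder alias to /etc/hosts only if it is missing.
grep -q 'cinder.openstack.svc.cluster.local' /etc/hosts || \
  echo '192.168.20.11 control01.cluster.local control01 cinder.openstack.svc.cluster.local' | \
  sudo tee -a /etc/hosts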

Please check the above issue.

Burrito images

Dear Sirs,

Are the Burrito images and/or the file 'burrito_os_images.tar' publicly available?
Do you intend to make them publicly available?
Thank you in advance.

Best Regards,

Vasilios Pasias

Add a BGP speaker dragent scheduling driver

When a node running neutron pods dies, the dragent scheduled to the BGP speaker disappears.
To resolve this issue, you must add bgp_drscheduler_driver to neutron.conf.
Please refer to the document below.
OpenStack docs: https://docs.openstack.org/mitaka/config-reference/networking/networking_options_reference.html

setting

root@btx-0:/# openstack bgp speaker show dragents bgpspeaker
The 'openstack bgp speaker show dragents' CLI is deprecated and will be removed in the future. Use 'openstack bgp dragent list' CLI instead.
+--------------------------------------+-----------+-------+-------+
| ID                                   | Host      | State | Alive |
+--------------------------------------+-----------+-------+-------+
| 3c9faf8c-c6a3-4f27-8be1-0118ab4b3050 | control03 | True  | :-)   |
| 66363166-b5f7-474d-ac88-940fe7c61704 | control01 | True  | :-)   |
| 6cb90f1a-2bd3-4b33-9572-41db7088f5bc | control02 | True  | :-)   |
+--------------------------------------+-----------+-------+-------+

root@btx-0:/# openstack bgp dragent list
+--------------------------------------+-----------+-------+-------+
| ID                                   | Host      | State | Alive |
+--------------------------------------+-----------+-------+-------+
| 66363166-b5f7-474d-ac88-940fe7c61704 | control01 | True  | :-)   |
| 6cb90f1a-2bd3-4b33-9572-41db7088f5bc | control02 | True  | :-)   |
| 3c9faf8c-c6a3-4f27-8be1-0118ab4b3050 | control03 | True  | :-)   |
+--------------------------------------+-----------+-------+-------+

node down

root@btx-0:/# openstack bgp speaker show dragents bgpspeaker
The 'openstack bgp speaker show dragents' CLI is deprecated and will be removed in the future. Use 'openstack bgp dragent list' CLI instead.

<title>504 Gateway Time-out</title>
504 Gateway Time-out
nginx

(While the node is down, the API request fails with an nginx 504 Gateway Time-out error page.)

root@btx-0:/# openstack bgp dragent list
+--------------------------------------+-----------+-------+-------+
| ID                                   | Host      | State | Alive |
+--------------------------------------+-----------+-------+-------+
| 66363166-b5f7-474d-ac88-940fe7c61704 | control01 | True  | :-)   |
| 6cb90f1a-2bd3-4b33-9572-41db7088f5bc | control02 | True  | XXX   |
| 3c9faf8c-c6a3-4f27-8be1-0118ab4b3050 | control03 | True  | :-)   |
+--------------------------------------+-----------+-------+-------+

node up

root@btx-0:/# openstack bgp speaker show dragents bgpspeaker
The 'openstack bgp speaker show dragents' CLI is deprecated and will be removed in the future. Use 'openstack bgp dragent list' CLI instead.
+--------------------------------------+-----------+-------+-------+
| ID                                   | Host      | State | Alive |
+--------------------------------------+-----------+-------+-------+
| 66363166-b5f7-474d-ac88-940fe7c61704 | control01 | True  | :-)   |
+--------------------------------------+-----------+-------+-------+

root@btx-0:/# openstack bgp dragent list
+--------------------------------------+-----------+-------+-------+
| ID                                   | Host      | State | Alive |
+--------------------------------------+-----------+-------+-------+
| 66363166-b5f7-474d-ac88-940fe7c61704 | control01 | True  | :-)   |
| 6cb90f1a-2bd3-4b33-9572-41db7088f5bc | control02 | True  | :-)   |
| 3c9faf8c-c6a3-4f27-8be1-0118ab4b3050 | control03 | True  | :-)   |
+--------------------------------------+-----------+-------+-------+

After the node came back up, the dragent that had previously been scheduled to the speaker was found to have disappeared.
To solve this problem, I used the scheduler driver below.

$ vi kubespray/roles/burrito.openstack/templates/osh/neutron.yml.j2
...
neutron:
  DEFAULT:
    bind_host: 127.0.0.1
    debug: True
    router_distributed: True
    core_plugin: ml2
    global_physnet_mtu: {{ openstack_mtu }}
    service_plugins: neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin,neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
    #service_plugins: router
    l3_ha_network_type: vxlan
    dhcp_agents_per_network: 2
    # added to keep dragent scheduling static across node restarts:
    bgp_drscheduler_driver: neutron_dynamic_routing.services.bgp.scheduler.bgp_dragent_scheduler.StaticScheduler

I tested node down and node up again after applying the driver.

node down

root@btx-0:/# openstack bgp speaker show dragents bgpspeaker
The 'openstack bgp speaker show dragents' CLI is deprecated and will be removed in the future. Use 'openstack bgp dragent list' CLI instead.
+--------------------------------------+-----------+-------+-------+
| ID                                   | Host      | State | Alive |
+--------------------------------------+-----------+-------+-------+
| 3c9faf8c-c6a3-4f27-8be1-0118ab4b3050 | control03 | True  | :-)   |
| 66363166-b5f7-474d-ac88-940fe7c61704 | control01 | True  | :-)   |
| 6cb90f1a-2bd3-4b33-9572-41db7088f5bc | control02 | True  | XXX   |
+--------------------------------------+-----------+-------+-------+

root@btx-0:/# openstack bgp dragent list
+--------------------------------------+-----------+-------+-------+
| ID                                   | Host      | State | Alive |
+--------------------------------------+-----------+-------+-------+
| 66363166-b5f7-474d-ac88-940fe7c61704 | control01 | True  | :-)   |
| 6cb90f1a-2bd3-4b33-9572-41db7088f5bc | control02 | True  | XXX   |
| 3c9faf8c-c6a3-4f27-8be1-0118ab4b3050 | control03 | True  | :-)   |
+--------------------------------------+-----------+-------+-------+

node up

root@btx-0:/# openstack bgp speaker show dragents bgpspeaker
The 'openstack bgp speaker show dragents' CLI is deprecated and will be removed in the future. Use 'openstack bgp dragent list' CLI instead.
+--------------------------------------+-----------+-------+-------+
| ID                                   | Host      | State | Alive |
+--------------------------------------+-----------+-------+-------+
| 3c9faf8c-c6a3-4f27-8be1-0118ab4b3050 | control03 | True  | :-)   |
| 66363166-b5f7-474d-ac88-940fe7c61704 | control01 | True  | :-)   |
| 6cb90f1a-2bd3-4b33-9572-41db7088f5bc | control02 | True  | :-)   |
+--------------------------------------+-----------+-------+-------+
root@btx-0:/# openstack bgp dragent list
+--------------------------------------+-----------+-------+-------+
| ID                                   | Host      | State | Alive |
+--------------------------------------+-----------+-------+-------+
| 66363166-b5f7-474d-ac88-940fe7c61704 | control01 | True  | :-)   |
| 6cb90f1a-2bd3-4b33-9572-41db7088f5bc | control02 | True  | :-)   |
| 3c9faf8c-c6a3-4f27-8be1-0118ab4b3050 | control03 | True  | :-)   |
+--------------------------------------+-----------+-------+-------+
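
For reference, if a speaker loses its dragent association without the driver change, it can be reattached manually with the neutron-dynamic-routing OSC commands (a sketch, to the best of my understanding of that CLI; the IDs are taken from the listings above):

# Remove the stale association, then reschedule the speaker onto the dragent.
openstack bgp dragent remove speaker 6cb90f1a-2bd3-4b33-9572-41db7088f5bc bgpspeaker
openstack bgp dragent add speaker 6cb90f1a-2bd3-4b33-9572-41db7088f5bc bgpspeaker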

I believe the above change solves the problem; please consider applying this improvement to burrito.

Unknown null interface creation issue.

Describe the bug
An interface literally named "null" is created when the overlay_iface_name variable is left empty.

To Reproduce
Steps to reproduce the behavior:

1. Set the overlay_iface_name variable to null (leave it empty)

Observed behavior
$ ip a s null
41: null: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether ee:d4:11:56:e0:7f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ecd4:11ff:fe56:e07f/64 scope link
       valid_lft forever preferred_lft forever
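
A guard in whatever task creates the overlay interface would avoid this. A minimal sketch in shell (the variable name comes from this report; the check itself is hypothetical, not existing burrito code):

# Hypothetical pre-check: refuse to create an overlay interface when
# overlay_iface_name is empty or the literal string "null".
if [ -z "${overlay_iface_name}" ] || [ "${overlay_iface_name}" = "null" ]; then
    echo "overlay_iface_name is not set; skipping overlay interface creation" >&2
    exit 0
fi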

Versions (please complete the following information):

  • OS: Rocky Linux 8.7
  • Burrito 1.1.1


ha.yml file typo problem

The hosts value in the ha.yml file needs to be corrected: kubespray renamed the kube-master inventory group to kube_control_plane.

$ cat ha.yml
- name: setup haproxy
  hosts: kube-master
  any_errors_fatal: true
  roles:
    - { role: burrito.haproxy, tags: ['haproxy', 'burrito']}
- name: setup keepalived
  hosts: kube-master
  any_errors_fatal: true
  roles:
    - { role: burrito.keepalived, tags: ['keepalived', 'burrito']}
...

It should be:

$ cat ha.yml
- name: setup haproxy
  hosts: kube_control_plane
  any_errors_fatal: true
  roles:
    - { role: burrito.haproxy, tags: ['haproxy', 'burrito']}
- name: setup keepalived
  hosts: kube_control_plane
  any_errors_fatal: true
  roles:
    - { role: burrito.keepalived, tags: ['keepalived', 'burrito']}
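
A quick way to apply the change (a sketch; assumes ha.yml is in the current directory):

# Replace the old kubespray group name with the new one in ha.yml.
sed -i 's/hosts: kube-master/hosts: kube_control_plane/' ha.yml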

local repo and registry should be on control nodes.

Is your feature request related to a problem? Please describe.
The local repository and registry pods are running on compute nodes after deployment.
They are platform services for the cluster and its containers, so I think they belong on the control plane.

Describe the solution you'd like
Add a nodeSelector to the pod spec so they run on control nodes:

nodeSelector:
  node-role.kubernetes.io/control-plane: ""
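
For pods that are already deployed, the same constraint can be applied in place (a sketch; the deployment name and namespace are assumptions, not burrito's actual resource names):

# Hypothetical: pin the local registry deployment to control-plane nodes.
kubectl -n kube-system patch deployment registry --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/control-plane":""}}}}}'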
