Installing OpenStack Kilo (Red Hat OSP7) LBaaS with @NuageNetworks VSP 3.2R4 (HAProxy)

Hi there. You can find hundreds of posts about installing OpenStack LBaaS. In this one I’ll give you a step-by-step guide to implementing LBaaS with Nuage VSP 3.2.R4 on OpenStack Kilo OSP7 (Red Hat). Kilo uses the LBaaS API v2.

I suggest you get the “VSP OpenStack Kilo Neutron Plugin User Guide (Release 3.2.R5 (Issue 2))”. Most of this post is based on that guide, specifically the section “Using OpenStack LBaaS with the Nuage Neutron Plugin”.

I want to say thanks to Claire. She’s given me HUGE support.

I’ve tested all these commands in our lab, called Nuts. Hussein and Remi did a great job providing this amazing resource (thanks, guys!). It’s a tool I’ve used with many customers to show how well Nuage works with OpenStack. Details about it below.

[Diagram: Nuts lab description]

The Nuage Virtualized Services Directory (VSD) is the brain, serving “as a policy, business logic and analytics engine”, and can be managed 100% through JSON-based APIs. Of course, it also gives you a GUI, which I’ll show shortly. The Virtualized Services Controller (VSC) is the datacenter network control plane, programming every network function. More details in my previous post and also at Nuage.

The lab has a consolidated OpenStack controller/network node called os-controller, running projects like Neutron, Keystone and Glance, plus two Nova compute nodes with KVM and the Nuage VRS (based on Open vSwitch).

os-controller is already configured with the Nuage plugin for Neutron. The /etc/neutron/neutron.conf file contains the line:
core_plugin = neutron.plugins.nuage.plugin.NuagePlugin

And /etc/neutron/plugin.ini should look like this:

 
default_net_partition_name = OpenStack_Nuage_Lab
server = 10.0.0.2:8443
serverauth = osadmin:osadmin

### Do not change the below options for standard installs
organization = csp
auth_resource = /me
serverssl = True
base_uri = /nuage/api/v3_2
cms_id = 540d931d-0585-4fce-8c3d-064fb7f357e0
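
These plugin.ini values (server, base_uri, auth_resource, organization) are also all you need to talk to the VSD JSON API yourself. Here is a minimal Python sketch of how such a request could be built — the XREST authorization scheme and the X-Nuage-Organization header are from my reading of the VSP API docs, so double-check them against the API guide for your release:

```python
import base64
import urllib.request

def vsd_request(server="10.0.0.2:8443", base_uri="/nuage/api/v3_2",
                resource="/me", user="osadmin", password="osadmin", org="csp"):
    """Build an authenticated GET request for the VSD REST API."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{server}{base_uri}{resource}",
        headers={
            "Authorization": f"XREST {token}",  # basic credentials, XREST scheme
            "X-Nuage-Organization": org,        # the CSP enterprise
            "Content-Type": "application/json",
        })

print(vsd_request().full_url)  # https://10.0.0.2:8443/nuage/api/v3_2/me
# urllib.request.urlopen(vsd_request())  # serverssl = True, so mind the certs
```

The /me resource is the usual first call: it returns your user object, including the API key used for subsequent requests.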

Installing the plug-in on the controller node

Let’s start by installing python-neutron-lbaas on the controller node:
[root@os-controller ~(kyst_adm)]# yum install python-neutron-lbaas

Update the service_providers section in /etc/neutron/neutron.conf (don’t use lbaas_agent.ini):


[service_providers]
service_provider=LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
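
That service_provider value packs four colon-separated fields: the service type, a provider name, the driver class, and an optional default flag. Just to make the format explicit, a quick sketch of how it breaks down:

```python
# Anatomy of the service_provider line: <service_type>:<name>:<driver>[:default]
line = ("LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy."
        "plugin_driver.HaproxyOnHostPluginDriver:default")

service_type, name, driver, flag = line.split(":")
print(service_type)  # LOADBALANCERV2
print(name)          # Haproxy
print(driver)        # neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver
print(flag)          # default (this provider is the default for LOADBALANCERV2)
```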

Add the service plugin for LBaaS API v2 under the [DEFAULT] section:


[DEFAULT]
service_plugins=neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

Restart neutron service:
[root@os-controller ~(kyst_adm)]# systemctl restart neutron-server.service

Now let’s move on to HAProxy and the Nuage plugin installation on the network node.

Installing HAProxy on the network node

Installing HAProxy is simple, just run: [root@os-controller ~(kyst_adm)]# yum install haproxy
However, TCP ports 80 and 8080 are already in use by other processes in our lab (use netstat -anp to check). So I changed the frontend port to 5000 and restarted the service. The HAProxy config file (/etc/haproxy/haproxy.cfg) is the following:


global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend  main *:5000
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    use_backend static          if url_static
    default_backend             app

backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

backend app
    balance     roundrobin
    server  app1 127.0.0.1:5001 check
    server  app2 127.0.0.1:5002 check
    server  app3 127.0.0.1:5003 check
    server  app4 127.0.0.1:5004 check

Now, restart the service: systemctl restart haproxy.service

And check the status of the service:



[root@os-controller etc]# service haproxy status
Redirecting to /bin/systemctl status  haproxy.service
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2015-12-21 13:58:19 PST; 5s ago
 Main PID: 13746 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─13746 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─13747 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─13748 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Dec 21 13:58:19 os-controller.novalocal systemd[1]: Started HAProxy Load Balancer.
Dec 21 13:58:19 os-controller.novalocal systemd[1]: Starting HAProxy Load Balancer...
Dec 21 13:58:19 os-controller.novalocal haproxy-systemd-wrapper[13746]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/...pid -Ds
Hint: Some lines were ellipsized, use -l to show in full.

It’s time to install the LBaaS plugin on the network node.

Installing the LBaaSv2 plugin on the network node

OK, just a reminder: this assumes you already have an OpenStack installation with the Nuage plugin working properly. If that’s not the case, you will have to install the Nuage plugin for Neutron before going further.

We need to install the VRS on our network node. The VRS will be in charge of managing communication between the compute nodes and the LBaaS.

Installing the VRS service on the network node

Here we’ll follow the instructions from the “VSP Install Guide Release 3.2R4”, section “VRS AND VRS-G SOFTWARE INSTALLATION ON REDHAT AND UBUNTU”. Our node runs Red Hat Enterprise Linux 7, so we’ll follow the guidelines for that distro and version.

You will need the Nuage-VRS-3.2.4-133-el7.tar.gz file later. Connect to support.alcatel-lucent.com and download it.

Let’s enable the EPEL repository as our first action: rpm -Uvh http://mirror.pnl.gov/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

Now, let’s enable the following repo in /etc/yum.repos.d/redhat.repo (note enabled = 1 in this section):


[rhel-7-server-optional-rpms]
metadata_expire = 86400
sslclientcert = /etc/pki/entitlement/7395579051263769833.pem
baseurl = https://cdn.redhat.com/content/dist/rhel/server/7/$releasever/$basearch/optional/os
ui_repoid_vars = releasever basearch
sslverify = 1
name = Red Hat Enterprise Linux 7 Server - Optional (RPMs)
sslclientkey = /etc/pki/entitlement/7395579051263769833-key.pem
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
gpgcheck = 1

Now we have to run yum update.
It’s time to go for a cup of coffee; this is going to take some time.

Install the following dependencies:


yum install libvirt
yum install python-twisted-core
yum install perl-JSON
yum install qemu-kvm
yum install vconfig

Let’s install the VRS packages we just downloaded.


tar zxvf Nuage-VRS-3.2.4-133-el7.tar.gz
yum localinstall nuage-openvswitch-3.2.4-133.el7.x86_64.rpm
yum localinstall nuage-openvswitch-dkms-3.2.4-133.el7.x86_64.rpm

Now, let’s set the personality and the IP of the controller in our /etc/default/openvswitch file. The file looks like this:



PERSONALITY=vrs
UUID=
CPE_ID=
DATAPATH_ID=
UPLINK_ID=
NETWORK_UPLINK_INTF=
NETWORK_NAMESPACE=
PLATFORM="kvm"
DEFAULT_BRIDGE=alubr0
GW_HB_BRIDGE=
GW_HB_VLAN=4094
GW_HB_TIMEOUT=2000
MGMT_ETH=
UPLINK_ETH=
GW_PEER_DATAPATH_ID=
GW_ROLE="backup"
CONN_TYPE=tcp

ACTIVE_CONTROLLER=10.0.0.3
SKB_LRO_MOD_ENABLED=no
DEFAULT_LOG_LEVEL=

Now, we need to take care of SELinux or our Open vSwitch will fail: you have to either disable SELinux or set it to permissive. You can just run setenforce 0 on the CLI, and also change the file /etc/selinux/config so the setting survives any later reboot. Use the getenforce command to check that the status is “Permissive”.

Let’s restart Open vSwitch: systemctl restart openvswitch.service

Now, let’s check if the service is working properly:

[root@os-controller ~(kyst_adm)]# ovs-vsctl show
4af4f578-7fbf-407c-b04a-8f00336421b1
    Bridge "alubr0"
        Controller "ctrl1"
            target: "tcp:10.0.0.3:6633"
            role: master
            is_connected: true
        Port "alubr0"
            Interface "alubr0"
                type: internal
        Port "svc-rl-tap1"
            Interface "svc-rl-tap1"
        Port "svc-rl-tap2"
            Interface "svc-rl-tap2"
        Port svc-pat-tap
            Interface svc-pat-tap
                type: internal
    ovs_version: "3.2.4-133-nuage"

Now we are ready to resume our plugin installation

Back to installing the LBaaSv2 plugin on the network node

Let’s add the following line to our /etc/neutron/neutron.conf file under the [DEFAULT] section:


[DEFAULT]
ovs_integration_bridge = alubr0

The full /etc/neutron/neutron.conf then looks like the following:


[DEFAULT]
service_plugins=neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
ovs_integration_bridge = alubr0
verbose = True
router_distributed = False
debug = False
state_path = /var/lib/neutron
use_syslog = False
log_dir =/var/log/neutron
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = neutron.plugins.nuage.plugin.NuagePlugin
auth_strategy = keystone
base_mac = fa:16:3e:00:00:00
mac_generation_retries = 16
dhcp_lease_duration = 86400
dhcp_agent_notification = True
allow_bulk = True
allow_pagination = False
allow_sorting = False
allow_overlapping_ips = True
agent_down_time = 75
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
allow_automatic_l3agent_failover = False
dhcp_agents_per_network = 1
l3_ha = False
api_workers = 4
rpc_workers = 4
use_ssl = False
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://10.0.0.10:8774/v2
nova_region_name =RegionOne
nova_admin_username =nova
nova_admin_tenant_id =f33c6e3b0519478ab6e55fef9a1a3d1c
nova_admin_password =56415bf8a5444bb6
nova_admin_auth_url =http://10.0.0.10:5000/v2.0
send_events_interval = 2
rpc_backend=neutron.openstack.common.rpc.impl_kombu
control_exchange=neutron
lock_path=/var/lib/neutron/lock


[matchmaker_redis]

[matchmaker_ring]

[quotas]

[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
report_interval = 30

[keystone_authtoken]
auth_uri = http://10.0.0.10:5000/v2.0
identity_uri = http://10.0.0.10:35357
admin_tenant_name = services
admin_user = neutron
admin_password = 3045b48a69f340b0

[database]
connection = mysql://neutron:92ed70427a014077@10.0.0.10/neutron
max_retries = 10
retry_interval = 10
min_pool_size = 1
max_pool_size = 10
idle_timeout = 3600
max_overflow = 20

[nova]

[oslo_concurrency]

[oslo_policy]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

kombu_reconnect_delay = 1.0
rabbit_host = 10.0.0.10
rabbit_port = 5672
rabbit_hosts = 10.0.0.10:5672
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_ha_queues = False

[service_providers]
service_provider=LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default


Let’s configure the /etc/neutron/lbaas_agent.ini file as follows:


[DEFAULT]
ovs_use_veth=False
interface_driver=nuage_neutron.lbaas.agent.nuage_interface.NuageInterfaceDriver

[haproxy]

Finally, let’s start our LBaaS agent with systemctl start neutron-lbaasv2-agent, and we are done.

We can start adding new load balancers at any moment

Playing with LBaaS

Sadly, Horizon doesn’t support the panels for LBaaSv2 yet, so you will have to use the Neutron APIs instead (please don’t blame Nuage or me for that). Liberty solves this anyway (I haven’t tested it yet). I suggest you start with the command line, as I’m showing in the following lines:


[root@os-controller ~(kyst_adm)]# neutron net-list
+--------------------------------------+------------------+------------------------------------------------------+
| id                                   | name             | subnets                                              |
+--------------------------------------+------------------+------------------------------------------------------+
| 24b003ec-d666-4814-9c55-5cb14d65a065 | adm.priv2        | f5944244-4e12-4c8a-a748-0326e8a015e8 192.168.52.0/24 |
| 2f61f543-214f-462f-afb7-182ec816abe9 | external_network | 8f73aa92-e8af-454b-bffe-55c72257453b 10.0.1.0/24     |
| 562972a3-3403-49a3-87aa-d2c9a714a0fd | adm.priv4        | c317c461-7da7-45b9-b1f0-ce45f0acfafa 192.168.54.0/24 |
| 7080b26f-e556-4207-8c5a-e403865dcc30 | adm.priv1        | f3355820-69bc-40c6-bfe2-e6c07df24d30 192.168.51.0/24 |
| b1a4897a-d6e8-4a0f-ae13-41a6bc40cea5 | private          | 45916c43-0f29-48bf-9fdd-332a2c99be5f 172.16.1.0/24   |
| b3631409-eace-4ae1-81b4-499fb0ce3104 | adm.priv3        | a7304423-2193-4f0c-8e95-9868cc329698 192.168.53.0/24 |
| eb0b7fc6-efd7-469d-9b6d-e0188719f5b1 | t-system01       | ff71594b-1e4e-4fdb-ac79-e71cf444bac2 169.87.23.0/24  |
+--------------------------------------+------------------+------------------------------------------------------+

[root@os-controller ~(kyst_adm)]# neutron lbaas-loadbalancer-create --name lb3 45916c43-0f29-48bf-9fdd-332a2c99be5f
Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | c5e548b9-6936-435d-b468-0aa4b9fcd08a |
| listeners           |                                      |
| name                | lb3                                  |
| operating_status    | ONLINE                               |
| provider            | haproxy                              |
| provisioning_status | ACTIVE                               |
| tenant_id           | 63d41744393243b6a51a95c6063fe4c1     |
| vip_address         | 172.16.1.4                           |
| vip_port_id         | b13807f4-371d-4df4-9e70-6c4db70e6f49 |
| vip_subnet_id       | 45916c43-0f29-48bf-9fdd-332a2c99be5f |
+---------------------+--------------------------------------+

If you want to see these load balancers properly in the Nuage console, you will have to create a listener, as follows:


[root@os-controller ~(kyst_adm)]# neutron lbaas-loadbalancer-list
+--------------------------------------+------+---------------+---------------------+----------+
| id                                   | name | vip_address   | provisioning_status | provider |
+--------------------------------------+------+---------------+---------------------+----------+
| a986bead-2fe5-4f53-a607-0c197565a1b3 | lb1  | 192.168.51.14 | ACTIVE              | haproxy  |
| b1bd8993-acc7-484d-ba93-b5ce185510b4 | lb0  | 192.168.51.13 | ACTIVE              | haproxy  |
| c5e548b9-6936-435d-b468-0aa4b9fcd08a | lb3  | 172.16.1.4    | ACTIVE              | haproxy  |
+--------------------------------------+------+---------------+---------------------+----------+
[root@os-controller ~(kyst_adm)]# neutron lbaas-listener-create --loadbalancer lb3 --protocol HTTP --protocol-port 80 --name listernerlb3
Created a new listener:
+--------------------------+------------------------------------------------+
| Field                    | Value                                          |
+--------------------------+------------------------------------------------+
| admin_state_up           | True                                           |
| connection_limit         | -1                                             |
| default_pool_id          |                                                |
| default_tls_container_id |                                                |
| description              |                                                |
| id                       | d0fb168b-008b-44b8-9bbc-b59d4ada021e           |
| loadbalancers            | {"id": "c5e548b9-6936-435d-b468-0aa4b9fcd08a"} |
| name                     | listernerlb3                                   |
| protocol                 | HTTP                                           |
| protocol_port            | 80                                             |
| sni_container_ids        |                                                |
| tenant_id                | 63d41744393243b6a51a95c6063fe4c1               |
+--------------------------+------------------------------------------------+
[root@os-controller ~(kyst_adm)]# neutron lbaas-listener-list
+--------------------------------------+-----------------+--------------+----------+---------------+----------------+
| id                                   | default_pool_id | name         | protocol | protocol_port | admin_state_up |
+--------------------------------------+-----------------+--------------+----------+---------------+----------------+
| d0fb168b-008b-44b8-9bbc-b59d4ada021e |                 | listernerlb3 | HTTP     |            80 | True           |
| b5c02849-a247-48ad-909d-cccbcbe4b367 |                 | listernerlb0 | HTTP     |            80 | True           |
| 0c061dcf-006f-4283-a88c-c14ce2f0096a |                 | listernerlb1 | HTTP     |            80 | True           |
+--------------------------------------+-----------------+--------------+----------+---------------+----------------+

This shows up in the VSD console as in the next picture:

[VSD console view of the load balancers]

Check the namespaces that you’ve just created:


[root@os-controller ~(kyst_adm)]# ip netns list
qlbaas-c5e548b9-6936-435d-b468-0aa4b9fcd08a
qlbaas-a986bead-2fe5-4f53-a607-0c197565a1b3
qlbaas-b1bd8993-acc7-484d-ba93-b5ce185510b4

Now let’s create a pool:


[root@os-controller ~(kyst_adm)]# neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listernerlb3 --protocol HTTP --name pool1
Created a new pool:
+---------------------+------------------------------------------------+
| Field               | Value                                          |
+---------------------+------------------------------------------------+
| admin_state_up      | True                                           |
| description         |                                                |
| healthmonitor_id    |                                                |
| id                  | 4fc8f356-01bf-4aa2-8fcb-afa5b49d8ef3           |
| lb_algorithm        | ROUND_ROBIN                                    |
| listeners           | {"id": "d0fb168b-008b-44b8-9bbc-b59d4ada021e"} |
| members             |                                                |
| name                | pool1                                          |
| protocol            | HTTP                                           |
| session_persistence |                                                |
| tenant_id           | 63d41744393243b6a51a95c6063fe4c1               |
+---------------------+------------------------------------------------+
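
ROUND_ROBIN simply hands each new connection to the next member in the list, wrapping around at the end. Conceptually it works like this toy sketch (not how HAProxy is implemented internally; the member addresses are the two we add below):

```python
from itertools import cycle

# The two members added to pool1 below:
members = ["192.168.51.2:80", "192.168.52.2:80"]
picker = cycle(members)

for request in range(4):
    print(f"request {request} -> {next(picker)}")
# request 0 -> 192.168.51.2:80
# request 1 -> 192.168.52.2:80
# request 2 -> 192.168.51.2:80
# request 3 -> 192.168.52.2:80
```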

Now I’ll add a couple of servers from different subnets (why not?):


[root@os-controller ~(kyst_adm)]# nova list
+--------------------------------------+--------------------+--------+------------+-------------+----------------------------------+
| ID                                   | Name               | Status | Task State | Power State | Networks                         |
+--------------------------------------+--------------------+--------+------------+-------------+----------------------------------+
| 7fc236ab-f43e-418e-b44a-f40da53a8256 | adm.priv1.inst_fip | ACTIVE | -          | Running     | adm.priv1=192.168.51.2, 10.0.1.7 |
| ff4c0705-73bc-467b-bbbf-f16a6795a53a | adm.priv2.inst_fip | ACTIVE | -          | Running     | adm.priv2=192.168.52.2, 10.0.1.5 |
| aa189578-28c6-4e97-bf4f-a432cd62c0a9 | adm.priv3.inst_fip | ACTIVE | -          | Running     | adm.priv3=192.168.53.2, 10.0.1.8 |
| 598e3ce8-aea1-4d74-aa88-6a94a7cb668d | adm.priv4.inst_fip | ACTIVE | -          | Running     | adm.priv4=192.168.54.2, 10.0.1.6 |
| 642dd34b-ddc5-4c38-a3bd-9697ee9ca81f | test01             | ACTIVE | -          | Running     | private=172.16.1.3, 10.0.1.4     |
| eb4602cd-8614-4ccb-96d2-23dbc2bde2d7 | tsystems01         | ACTIVE | -          | Running     | t-system01=169.87.23.2           |
+--------------------------------------+--------------------+--------+------------+-------------+----------------------------------+
[root@os-controller ~(kyst_adm)]# neutron lbaas-member-create --subnet adm.priv1 --address 192.168.51.2 --protocol-port 80 pool1
Created a new member:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| address        | 192.168.51.2                         |
| admin_state_up | True                                 |
| id             | 0f172e78-02f3-4046-8b16-9670b4d3bbb4 |
| protocol_port  | 80                                   |
| subnet_id      | f3355820-69bc-40c6-bfe2-e6c07df24d30 |
| tenant_id      | 63d41744393243b6a51a95c6063fe4c1     |
| weight         | 1                                    |
+----------------+--------------------------------------+
[root@os-controller ~(kyst_adm)]# neutron lbaas-member-create --subnet adm.priv2 --address 192.168.52.2 --protocol-port 80 pool1
Created a new member:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| address        | 192.168.52.2                         |
| admin_state_up | True                                 |
| id             | 6f124fce-f44e-45d0-b49e-69cddb93f894 |
| protocol_port  | 80                                   |
| subnet_id      | f5944244-4e12-4c8a-a748-0326e8a015e8 |
| tenant_id      | 63d41744393243b6a51a95c6063fe4c1     |
| weight         | 1                                    |
+----------------+--------------------------------------+

Let’s check whether our load balancer is working. I’ll create an index.html file with the content “I am into server ONE!” and start an HTTP server on the pool member 192.168.51.2. Then I’ll access it through the load balancer VIP 172.16.1.4.


[centos@adm ~]$ ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.51.2  netmask 255.255.255.0  broadcast 192.168.51.255
        inet6 fe80::f816:3eff:fe6b:db0b  prefixlen 64  scopeid 0x20
        ether fa:16:3e:6b:db:0b  txqueuelen 1000  (Ethernet)
        RX packets 10442  bytes 7627092 (7.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7828  bytes 657499 (642.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 21498  bytes 1868738 (1.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21498  bytes 1868738 (1.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[centos@adm ~]$ cat index.html 
I am into server ONE!
[centos@adm ~]$ sudo python -m SimpleHTTPServer 80 &
[1] 21508
[centos@adm ~]$ Serving HTTP on 0.0.0.0 port 80 ...

[centos@adm ~]$ telnet 172.16.1.4 80
Trying 172.16.1.4...
Connected to 172.16.1.4.
Escape character is '^]'.
GET /index.html
172.16.1.4 - - [22/Dec/2015 16:35:50] "GET /index.html HTTP/1.0" 200 -
HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/2.7.5
Date: Tue, 22 Dec 2015 16:35:50 GMT
Content-type: text/html
Content-Length: 22
Last-Modified: Tue, 22 Dec 2015 16:35:04 GMT

I am into server ONE!
Connection closed by foreign host.
[centos@adm ~]$ 
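
Instead of typing the request into telnet by hand, the same check can be scripted. A stdlib-only Python sketch (the VIP and path match the test above; point it at your own VIP):

```python
import http.client

def fetch(host, port=80, path="/index.html", timeout=5):
    """GET path from host and return (status, body), like the telnet test."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        return resp.status, resp.read().decode()
    finally:
        conn.close()

# Against the VIP of lb3:
# fetch("172.16.1.4")  ->  (200, "I am into server ONE!\n")
```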

 

Well, and we’re done!
See you soon!
