===============================================
AN-003 Deploying aggregators on Microsoft Azure
===============================================

Microsoft Azure is a popular cloud computing platform similar to Amazon Web
Services that allows for quick deployment and management of Internet-connected
servers. It is most popular for hosting web sites, but can be used to deploy
aggregators as well, with some caveats.

The issues described here may also be applicable to other cloud providers that
use private networking.

Installing the aggregator
=========================

Currently, aggregators are only supported using Debian 8 "Jessie". When
creating the instance, ensure the Debian 8 image is used. Once installed,
follow the instructions `here
<../nodes/provisioning-nodes/node-installation-methods/installing-on-an-existing-debian-host-or-using-standard-debian-isos.html>`__
to complete the setup.

Networking considerations
=========================

The main issue with IPv4 networking in Azure is that there is no option for
routing subnets to hosts. Only single IP addresses can be allocated, and those
addresses are randomly assigned, so subnet routing cannot even be emulated
with IPv4. It can be emulated with IPv6 addresses, but every address in each
range must be assigned to the aggregator interface, which is burdensome.
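To illustrate the burden, emulating a routed IPv6 range means assigning every
address individually; a sketch assuming a hypothetical documentation prefix on
``eth0``:

```shell
# Sketch only: assign all 255 host addresses of a hypothetical /120
# to eth0 one by one. The prefix and interface name are placeholders.
prefix="2001:db8:0:1::"
for i in $(seq 1 255); do
    ip -6 addr add "${prefix}$(printf '%x' "$i")/128" dev eth0
done
```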

The other issue is that public IPv4 addresses are never assigned directly to
instance interfaces. All public IPv4 addresses are assigned to a private
address and translated using NAT. While aggregators are capable of handling
tunnels in such a scenario, it makes bond routing complicated.

However, even with the above limitations, there are some scenarios that work
if set up in a particular way.

CPE NAT IP to individual bonds
==============================

This scenario involves providing individual per-bond access to the Internet,
each with its own public IP address.

In a typical deployment, a subnet containing the IPs available for CPE NAT use
would be routed to the aggregator network, and the aggregator would announce
each individual IP via OSPF or iBGP. If dynamic routing were not available,
static routes could be made to the individual aggregators hosting the bonds,
but that would require manual routing changes whenever a bond moved.

In Azure, each public IP address is assigned to a private IP address, which
would presumably be configured on an interface. By default, when a virtual
instance is created, a network interface is created, a dynamically chosen
private IP address is assigned to and configured on the interface, and a
public IP address is associated with the private IP address.

For an aggregator, we can use this address for its primary communication, but
we will need additional addresses for each CPE NAT IP. Azure public IP
addresses can be used to provide these additional addresses.

Adding the Public IP Address
-----------------------------

To add a new IP address in the Azure portal, perform the following steps:

1. Navigate to the virtual machine running the aggregator the CPE NAT IP will
   be assigned to
#. On the left menu, select **Networking**
#. Click the link to the network interface near the top
#. Select **IP configurations** on the left menu
#. Click **Add**
#. Set the **Name**
#. Set **Public IP address** to **Enabled**
#. Click **Configure required settings** under **IP address**
#. Click **Create new**
#. Set the name to anything; using the same value as the **Name** set above is
   advised
#. Set **Assignment** to **Static**
#. Click **OK**
#. Take note of the newly assigned private IP address and its associated
   public IP address
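If you prefer to script this, the same association can be made with the Azure
CLI; a sketch using hypothetical resource, NIC, and IP configuration names:

```shell
# Hypothetical names throughout; adjust to your environment.
az network public-ip create \
    --resource-group my-rg \
    --name cpe-nat-ip-1 \
    --allocation-method Static

# Add an IP configuration to the aggregator's NIC and associate
# the public IP with it.
az network nic ip-config create \
    --resource-group my-rg \
    --nic-name aggregator-nic \
    --name cpe-nat-ip-1 \
    --public-ip-address cpe-nat-ip-1

# Show the private/public address pair that was assigned.
az network nic ip-config show \
    --resource-group my-rg \
    --nic-name aggregator-nic \
    --name cpe-nat-ip-1
```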

.. note::
    You do **not** need to configure the private or public address in the
    aggregator's network configuration

Configuring the CPE NAT IP
--------------------------

On a bond assigned to the aggregator, configure a private connected IP and add
the CPE NAT IP using the **private** IP address we added to the interface. The
Azure gateway will NAT the public IP address to the private CPE NAT IP
address, and the bonder will NAT the private IP address to the connected IP.

Assuming the bonder is online, all of the routing will be in place.

.. note::
    Due to the lack of dynamic routing for public IP addresses in Azure,
    moving bonds between aggregators will not normally work unless the static
    associations are updated via the Azure portal or API. As a consequence,
    aggregator failover will **not** work in this scenario.

Optional: Route CPE NAT IPs across a group of aggregators
---------------------------------------------------------

Since the public IPs are associated with interfaces on specific instances,
each bond must be assigned to a specific aggregator, and routing will break if
it moves. However, as of Bonded Internet 6.5, bonds can move to other
aggregators without the association needing to be updated right away.

With Bonded Internet 6.5, we can configure custom routing protocols so that
the aggregator holding the associated IP routes traffic on to the aggregator
currently hosting the bond.

In a typical data center deployment, CPE NAT IP routes are distributed across
the main physical segment of the aggregators to a local router via a protocol
such as BGP, OSPF, or Babel. Unfortunately, Azure does not allow such peering,
and addresses must be routed statically via the Azure API or portal. We can,
however, peer the aggregators with each other using BGP so that bonds can move
between aggregators without breaking routing. Note that the Azure virtual
networks do not allow multicast traffic, so OSPF and Babel will not work
unless tunneled.

See `here <../dynamic-routing/configuring-dynamic-routing-in-bonding.html>`__ for
more information on setting up BGP between aggregators. Note that CPE NAT IPs
are always exposed in the main routing table, so the protocols should not have
a space defined.
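Conceptually, the session each aggregator ends up running looks like the
following BIRD-style configuration. This is a sketch only; bondingadmin
generates the real configuration, and the AS number and peer address here are
placeholders:

```text
protocol bgp peer_agg2 {
    local as 64512;
    neighbor 10.0.0.5 as 64512;  # private IP of the peer aggregator
    import all;
    export all;                  # announce local CPE NAT IP routes
}
```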

Private WAN
===========

.. note::
   The **With private WAN routers** mode of private WAN will not work with
   Azure since the GRE protocol used in that mode is blocked within Azure
   virtual networks. Use the "Managed mesh" or "Unmanaged" mode instead.

Private WAN can also be deployed in Azure, but due to the lack of layer-2
networking, the private WAN traffic will have to transit across VXLANs to get
to other aggregators. This means using the **Managed mesh** or **Unmanaged**
private WAN modes.

Also, note that Azure does not allow aggregators to connect to multiple
virtual networks even with additional interfaces. To get around this we will
need to set up VXLAN peering from the aggregator to a virtual machine hosted
within the target resource group.

Due to these complications, it is highly recommended to simply place dedicated
aggregators for each customer into their own Azure resource groups, disable
private WAN, and set up peering between aggregators via BGP and static
routing.

If you do need to extend a space into an Azure virtual network, either to
integrate with existing aggregators or to use larger aggregators for multiple
spaces, follow the steps in this section.

Adding a new resource group for the space
-----------------------------------------

If the customer does not already have its own Azure account or resource
group, a new one will need to be added:

1. Navigate to **All services**
#. Under **General** select **Resource groups**
#. Click **Add**
#. Set **Resource group** to a descriptive name
#. Set **Region** appropriately
#. Click **Review + create**
#. Click **Create**
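The equivalent Azure CLI, with a hypothetical name and region:

```shell
# Hypothetical name and region; adjust to your environment.
az group create \
    --name customer-rg \
    --location eastus
```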

Adding a VXLAN endpoint
-----------------------

We need to create a VXLAN endpoint on the target resource group. This machine
can be any Linux distribution or router with VXLAN support. However, the
easiest option is to simply add an aggregator to a private WAN in **Managed
mesh** mode but without adding any bonds. The main interface will be used for
the VXLAN traffic, while an additional interface will be used to carry the
internal private WAN traffic.

.. note::
   If you need to provide public Internet access for the private WAN, consider
   using a bare Linux distribution or a dedicated router distribution such as
   VyOS. They will be able to terminate the VXLAN, route the Azure networks,
   and NAT the private WAN networks. Setup of such a gateway is beyond the
   scope of this document.

Since this aggregator does not handle bonds, it does not have the same CPU and
memory requirements as a normal aggregator. Any small instance should suffice.
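For example, the endpoint could be created with the Azure CLI as follows; the
image alias, size, and names are illustrative only:

```shell
# Hypothetical names; a small burstable size is plenty for a
# bond-less VXLAN endpoint.
az vm create \
    --resource-group customer-rg \
    --name vxlan-endpoint \
    --image Debian \
    --size Standard_B1s \
    --admin-username bonding \
    --generate-ssh-keys
```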

Once the instance has been created, we will need to add a secondary interface:

1. Navigate to the virtual machine for the endpoint
#. At the top of the page, click **Stop**
#. Wait for the machine to be stopped
#. On the left menu, select **Networking**
#. At the top of the page, click **Attach network interface**
#. Click **Create network interface**
#. Set **Name**
#. Click **Create**
#. Wait for the interface to deploy. This will take a minute or so and a
   notification will be displayed when it's done
#. Navigate to the newly created interface
#. On the left menu, select **IP configurations**
#. Set **IP forwarding** to **Enabled**
#. Click **Save**
#. Go back to the **Attach network interface** dialog for the virtual machine
#. Select the newly created interface
#. Click **OK**
#. Wait for the interface to be added. A notification will be displayed
#. Navigate back to **Networking** for the virtual machine
#. Select the tab for the new interface
#. Take note of the assigned private IP
#. Navigate back to the virtual machine
#. Click **Start**

Next, we need to set up the secondary interface on the VXLAN peer. If using an
aggregator, this can be configured in bondingadmin:

1. Navigate to the aggregator defined for the virtual network
#. Click the interfaces **Add** button
#. Set **Interface name** to the second interface. It will likely be called
   ``eth1``
#. Set **Space** to the desired space
#. Click **Add IP address**
#. Set the private IP address assigned to the interface by Azure
#. Click **Add**

Routing between Azure virtual networks and private WAN networks
---------------------------------------------------------------

Next, you will need to add routes on the private WAN as well as the Azure
portal to ensure hosts can talk to each other.

When we added the interface to the VXLAN endpoint aggregator, the route for
the subnet containing the interface's IP was injected automatically. If all of the
hosts the private WAN needs to access are within that subnet we don't need to
set up any additional routes in bondingadmin. However, if there are additional
subnets in Azure that need to be accessed, we can set up a static protocol to
inject those routes:

1. Navigate to the VXLAN endpoint aggregator defined for the virtual network
#. Click the protocols **Add** button
#. Set **Name**
#. Set **Space** to the desired space
#. Set **Protocol** to **Static**
#. For every additional subnet that needs to be routed:

   a) Click **Add route**
   #) Set **Network** to the subnet
   #) Set **Destination** to **Gateway**
   #) Set **Address** to the first address in the interface's subnet

.. note::
   If you want to add a default route via Azure, you will need to add a NAT
   gateway host in the Azure virtual network and route via that instead of the
   normal Azure default gateway. Set up of such a gateway is beyond the scope
   of this document.

On the Azure side, routes for the bonds in the private WAN will need to be
added to the Azure portal:

1. Navigate to **All services**
#. Under **Networking** select **Route tables**
#. If no route table is associated with the resource group:

   a) Click **Create route table**
   #) Set **Name**
   #) Set **Resource group** to the resource group used by the space
   #) Click **Create**
   #) Wait for the route table to be created

#. Navigate to the route table for the resource group
#. In the left menu, select **Routes**
#. For each route inside the private space that needs to access resources in
   the Azure resource group:

   a) Click **Add**
   #) Set **Route name**
   #) Set **Address prefix** to the subnet
   #) Set **Next hop type** to **Virtual appliance**
   #) Set **Next hop address** to the IP address configured on the secondary
      interface
   #) Click **OK**

#. In the left menu, select **Subnets**
#. Click **Associate**
#. Set **Virtual network** to the virtual network associated with the space's
   resource group
#. Select all subnets that need to be routed to and from the private WAN
   networks
#. Click **OK**
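The same routes can be managed with the Azure CLI; the names, prefixes, and
next-hop address below are placeholders:

```shell
# Hypothetical names; adjust to your environment.
az network route-table create \
    --resource-group customer-rg \
    --name private-wan-routes

# One route per private WAN subnet, next-hopping to the secondary
# interface of the VXLAN endpoint (10.1.0.10 is a placeholder).
az network route-table route create \
    --resource-group customer-rg \
    --route-table-name private-wan-routes \
    --name bond-subnet-1 \
    --address-prefix 172.16.10.0/24 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.1.0.10

# Associate the table with each subnet that needs the routes.
az network vnet subnet update \
    --resource-group customer-rg \
    --vnet-name customer-vnet \
    --name default \
    --route-table private-wan-routes
```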

Optional: Add direct VXLAN peering
----------------------------------

.. note::
    This is optional for the **Managed mesh** mode, but required for the
    **Unmanaged** mode. Also, it is probably unnecessary unless you are
    routing large amounts of traffic over the private WAN.

Since we know what the private IP addresses of the aggregators are, we can
provide a more direct path between the aggregators by adding custom VXLANs to
the involved aggregators within the Azure virtual networks.

A unique VNI should be created for each space. Each VXLAN for the space should
use that assigned VNI. Failure to do so will result in broken routing or
unexpected cross routing.

Also, choose a unique subnet within the private WAN to assign to the VXLAN
interfaces. Each aggregator will need a unique address in the subnet.

For the routing protocol, we will use Babel. OSPF will also work, but Babel is
a more modern protocol that is better suited to this kind of application and
is simpler to implement.

On each aggregator, for each space, do the following:

1. Click the interfaces **Add** button:

   a) Set **Type** to **VXLAN**
   #) Set **Interface name** to something unique per space
   #) Set **Space** to the desired space
   #) Set the **VNI** to the unique VNI chosen for the space
   #) Click **Add IP address** and set a unique IP address in the subnet chosen
      for the space
   #) For all other aggregators involved in the space, click **Add endpoint**
      and set the aggregator's private IP address and the VNI chosen for the
      space

#. Click the protocols **Add** button:

   a) Set the **Name**
   #) Set **Space** to the desired space
   #) Set **Protocol** to **Babel**
   #) Click **Add interface**
   #) Click **Add interface pattern**
   #) Set **Pattern 1** to the interface name chosen for the interface in the
      space
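For reference, the VXLAN devices bondingadmin creates are equivalent to plain
Linux VXLAN interfaces. On a bare-Linux endpoint, or for troubleshooting,
something like the following sketch applies; the VNI, device names, and
addresses are all placeholders:

```shell
# One VXLAN device per space, with unicast peering to each other
# aggregator. VNI 100 and all addresses shown are placeholders.
ip link add vx-space1 type vxlan id 100 dstport 4789 \
    local 10.0.0.4 dev eth0

# Add a default forwarding entry for each peer aggregator so that
# unknown traffic is flooded to it over unicast.
bridge fdb append 00:00:00:00:00:00 dev vx-space1 dst 10.0.0.5

ip addr add 172.31.255.1/29 dev vx-space1
ip link set vx-space1 up
```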
