=========================
Provisioning aggregators
=========================

The aggregator provisioning process is very similar to the process for
imaging bonders, but there are a few extra steps, such as integrating
the host into the datacentre's dynamic routing.

Before provisioning an aggregator, you must create its `record in the
management server <adding-and-updating-aggregators.html>`__.

Virtualization considerations
------------------------------

If using virtualization, please familiarize yourself with the
recommendations in `Virtualization best
practices <../nodes/supported-environments-for-nodes.html#virtualization-best-practices>`__.

To set up the aggregator guest, follow these steps:

#. Configure the guest as 64-bit Debian or OpenSUSE Linux. 32-bit is
   not supported. If using Xen, configure as PV-GRUB or HVM.
#. Configure one NIC. If using VMware, use the VMXNET 3 driver. (If
   VMXNET3 is not offered, you may need to verify that the guest OS type
   is registered as Debian or Ubuntu Linux, not generic Linux.)

System requirements
---------------------

See `System Requirements <../nodes/system-requirements.html#Aggregators>`__ for
aggregators.

Aggregator installation procedure
----------------------------------

To provision a new aggregator, follow these steps:

1. Provision the aggregator operating system and install SD-WAN
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The provisioning procedure for aggregators is the same as for bonders.
Follow the instructions in `Node installation
methods <../nodes/provisioning-nodes/node-installation-methods/index.html>`__. We recommend using
the `custom
ISO <../nodes/provisioning-nodes/node-installation-methods/installing-from-a-bonded-internet-iso-debian-10.html>`__
when imaging aggregators.

The operating system and SD-WAN will be installed just as they are
for bonders. You will need to provide the aggregator's node key as
described in `Initial Bonding
configuration <../nodes/provisioning-nodes/initial-bonding-configuration.html>`__.

2. Reconfigure network settings
++++++++++++++++++++++++++++++++

.. note::
    This step is not necessary if you provisioned the aggregator in-place
    with its production IP address.

The aggregator will initially be configured to obtain an IP address via
DHCP on the network interface used during provisioning. However,
aggregators generally use statically-assigned IP addresses in
production. To set a static IP address, log in as the root user with the
default password shown on the Node Setup tab of the space whose ISO was
used to provision the aggregator. Update the ``/etc/network/interfaces``
file with the static IP, netmask, and gateway on the appropriate
interface.

The ``interfaces`` file should contain a block such as:

::

    auto <interface>
    iface <interface> inet static
        address <IP address/netmask>
        gateway <gateway>

For example, for an aggregator with IP address 203.1.113.253 on eth0,
with netmask 255.255.255.0 and gateway 203.1.113.254, the ``interfaces``
file should contain the following block. Note that the
``iface eth0 inet dhcp`` line has been changed to ``iface eth0 inet static``.

::

    auto eth0
    iface eth0 inet static
        address 203.1.113.253/24
        gateway 203.1.113.254

.. note::
    The interface being used for all bonding traffic must be the same
    as the interface being used for the default gateway.
    For example, if eth0 is configured as the main interface for bonded traffic,
    then the default gateway should also be reached via eth0.
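
The ``address`` line above uses CIDR prefix notation (``/24``) rather than
the dotted netmask. The equivalence (255.255.255.0 = ``/24``) can be computed
with a small shell helper; this is purely illustrative, and the
``mask_to_prefix`` function is not part of any product tooling:

```shell
# Convert a dotted-decimal netmask to the CIDR prefix length used in
# /etc/network/interfaces "address" lines. Illustrative helper only.
mask_to_prefix() {
    old_ifs=$IFS
    IFS=.
    prefix=0
    for octet in $1; do
        case $octet in
            255) prefix=$((prefix + 8)) ;;
            254) prefix=$((prefix + 7)) ;;
            252) prefix=$((prefix + 6)) ;;
            248) prefix=$((prefix + 5)) ;;
            240) prefix=$((prefix + 4)) ;;
            224) prefix=$((prefix + 3)) ;;
            192) prefix=$((prefix + 2)) ;;
            128) prefix=$((prefix + 1)) ;;
            0)   ;;
        esac
    done
    IFS=$old_ifs
    echo "$prefix"
}

mask_to_prefix 255.255.255.0    # prints 24, i.e. the /24 used in the example
```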

To configure an IPv6 address as well, **append** something like this:

::

    iface eth0 inet6 static
        address 2001:db8::c0ca:1eaf/64
        gateway 2001:db8::1ead:ed:beef

.. note::
    An IPv4 address is always required.
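
Putting the pieces together, a dual-stack ``interfaces`` block combining the
IPv4 and IPv6 examples above would look like the following sketch (using the
same documentation addresses as above):

```shell
# Example /etc/network/interfaces block with both an IPv4 and an
# appended IPv6 stanza for the same interface.
auto eth0
iface eth0 inet static
    address 203.1.113.253/24
    gateway 203.1.113.254

iface eth0 inet6 static
    address 2001:db8::c0ca:1eaf/64
    gateway 2001:db8::1ead:ed:beef
```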

To use a VLAN on the main network interface, use a block such as:

::

    auto <interface>
    auto <interface>.<vlan ID>
    iface <interface>.<vlan ID> inet static
        address <IP address/netmask>
        gateway <gateway>

For example, using VLAN 20:

::

    auto eth0
    auto eth0.20
    iface eth0.20 inet static
        address 203.1.113.253/24
        gateway 203.1.113.254
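
On Debian, name-based VLAN interfaces such as ``eth0.20`` may require the
``vlan`` package and the ``8021q`` kernel module. As an illustrative check
(commands assume the example interface and VLAN ID above):

```shell
apt-get install vlan -y    # ifupdown VLAN support
modprobe 8021q             # load the 802.1Q tagging module
ip -d link show eth0.20    # after ifup, should show "vlan" with "id 20"
```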

After reconfiguring the network, we recommend rebooting the host to
confirm that the new configuration comes up properly. You can then shut
down the aggregator and move it to its production location.

See
https://wiki.debian.org/NetworkConfiguration#Configuring_the_interface_manually
for further information.

3. If using VMware, install VMware tools
+++++++++++++++++++++++++++++++++++++++++

If the aggregator is hosted on VMware, install the open-source VMware
tools:

::

    apt-get install open-vm-tools -y

After installing VMware tools, restart bonding or reboot the aggregator.

4. Configure link aggregation if necessary
+++++++++++++++++++++++++++++++++++++++++++

This step is not necessary if the aggregator is not using Ethernet link
aggregation.

If the aggregator uses 802.3ad Ethernet link aggregation, a TX queue for
the bond interface must be created or SD-WAN QoS will not work
and performance will be poor. By default, bond interfaces do not have TX
queues. To specify a TX queue, run the following command:

::

    ip link set <bond interface> txqueuelen <tx queue length (1000 packets recommended)>

For example, for an interface called bond1 and a TX queue length of
1000, run:

::

    ip link set bond1 txqueuelen 1000
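
You can confirm that the queue length took effect by reading the interface's
sysfs attribute (``bond1`` as in the example):

```shell
ip link set bond1 txqueuelen 1000
cat /sys/class/net/bond1/tx_queue_len    # should print 1000
```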

Debian
++++++

This setting is not persistent across reboots. To apply the setting on
boot, add an option to the interface configuration section in
``/etc/network/interfaces``:

::

    post-up ip link set bond1 txqueuelen 1000

For example:

::

    auto bond1
    iface bond1 inet static
        address 203.1.113.253/24
        gateway 203.1.113.254
        slaves eth0 eth1
        post-up ip link set bond1 txqueuelen 1000
        # Other bond options...

For further information on 802.3ad link aggregation in Debian, read
https://wiki.debian.org/Bonding.

OpenSUSE
++++++++

Create the ``/etc/sysconfig/network/ifcfg-bond0`` file and configure the bonding device:

::

    BOOTPROTO='none'
    STARTMODE='auto'
    IPADDR='192.168.0.1/24'

    BONDING_MASTER='yes'
    BONDING_SLAVE_0='eth0'
    BONDING_SLAVE_1='eth1'
    BONDING_MODULE_OPTS='mode=802.3ad miimon=100 lacp_rate=1 xmit_hash_policy=layer2+3'

Make sure the mode in the ``BONDING_MODULE_OPTS`` parameter matches the desired
setup. The example above configures an 802.3ad link-aggregated interface
to a compatible switch with the appropriate configuration applied.

Create the ``/etc/sysconfig/network/ifcfg-eth0`` and ``/etc/sysconfig/network/ifcfg-eth1`` files
and configure the slave interfaces:

::

    BOOTPROTO='none'
    STARTMODE='hotplug'
    IPADDR='0.0.0.0'

For further information on 802.3ad link aggregation in OpenSUSE, read
https://doc.opensuse.org/documentation/leap/reference/html/book-reference/cha-network.html#sec-network-iface-bonding
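
After bringing the bond up (for example with ``wicked ifup bond0`` on
openSUSE), the negotiated mode can be verified through the kernel's bonding
status file. This check is illustrative and assumes the ``bond0`` name used
above:

```shell
wicked ifup bond0                              # apply the ifcfg-bond0 configuration
grep "Bonding Mode" /proc/net/bonding/bond0    # expect "IEEE 802.3ad Dynamic link aggregation"
```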

5. Configure dynamic routing if necessary
++++++++++++++++++++++++++++++++++++++++++

This step is not necessary if your core network does not use dynamic
routing.

Configure protocols on the aggregator to integrate with your dynamic
routing network. Follow the instructions in `Configuring dynamic
routing in bonding
<../dynamic-routing/configuring-dynamic-routing-in-bonding.html>`__.

6. Enable filesystem monitoring if necessary
+++++++++++++++++++++++++++++++++++++++++++++

This step is optional.

If an aggregator encounters hard disk errors, bonded network traffic may
be interrupted even though the aggregator can still respond to the
management server's failover checks. This leads to an outage for all the
bonds on the aggregator even if aggregator failover has been configured.

To prevent this, SD-WAN includes a utility that can monitor the
main disk mount and bring down all network access except SSH if it finds
that the mount has changed from read-write mode to read-only mode, which
indicates hard disk issues. This guarantees that the aggregator stops
responding to failover checks from the management server so that
aggregator failover works as expected. SSH access is still allowed so
that an administrator can try to log in and investigate the disk error
manually.

Hard disk monitoring is disabled by default. To enable it, run these
commands on the aggregator:

::

    echo 'fm:235:respawn:/usr/lib/bonding/fsmonitor' >> /etc/inittab
    init q

The ``init q`` command starts the monitoring service immediately. No
reboot is required, and it will also be started on future reboots.
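
As an illustrative follow-up, you can confirm that the monitor is active
after running the commands above:

```shell
grep fsmonitor /etc/inittab    # the respawn entry should appear exactly once
pgrep -af fsmonitor            # the monitoring process should be listed
```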

After a disk read-only change has been detected and the network
interrupted, the host must be rebooted to restore network access.
