Example Salt state – bonder DHCP server

Note

The DHCP configurations in this document are based on our legacy DHCP server recommendations and are kept for reference purposes. In 2016.2 a new and improved way to configure DHCP servers was introduced. Legacy DHCP server setups will continue to work in SD-WAN 2016.1, but this feature is deprecated in 2016.2.

The configurations in this document offer an end-to-end example of a variety of Salt topics. They show how Salt can be used to install and configure a DHCP server on a bonder or set of bonders.

Configuring Salt states and default values

The following state configures the package installation, dnsmasq configuration, and iptables necessary to offer DHCP on a bonder. State files go in /etc/bondingadmin/salt-config/states/.

/etc/bondingadmin/salt-config/states/dhcp/dnsmasq.sls

dnsmasq is installed:
  pkg.installed:
    - name: dnsmasq

dnsmasq is running:
  service.running:
    - name: dnsmasq
    - require:
      - pkg: dnsmasq is installed
      - file: /etc/dnsmasq.conf

/etc/dnsmasq.conf:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://dhcp/dnsmasq.conf
    - makedirs: True
    - template: jinja
    - require:
      - pkg: dnsmasq is installed
  cmd.wait:
    - name: service dnsmasq restart
    - watch:
      - file: /etc/dnsmasq.conf

dhcp is allowed:
  iptables.insert:
    - position: 1
    - table: filter
    - chain: INPUT
    - jump: ACCEPT
    - proto: udp
    - sport: 68
    - dport: 67
    - if: {{ salt['pillar.get']('dnsmasq:interface', 'eth0') }}
    - save: False

This state includes four declarations:

  1. dnsmasq package is installed
  2. dnsmasq is running
  3. /etc/dnsmasq.conf has the expected contents and permissions, and dnsmasq is restarted when the file changes
  4. there is an iptables rule that accepts DHCP traffic on an interface defined in a pillar, or eth0 if no such pillar exists

This file references the file dhcp/dnsmasq.conf:

/etc/bondingadmin/salt-config/states/dhcp/dnsmasq.conf

interface={{ salt['pillar.get']('dnsmasq:interface', 'eth0') }}
dhcp-range={{ salt['pillar.get']('dnsmasq:start_addr', '10.10.0.100') }},{{ salt['pillar.get']('dnsmasq:end_addr', '10.10.0.199') }},{{ salt['pillar.get']('dnsmasq:lease_time', '12h') }}
dhcp-option=3,{{ salt['pillar.get']('dnsmasq:gateway', '10.10.0.1') }}
dhcp-option=6,{{ salt['pillar.get']('dnsmasq:dns_servers', '8.8.8.8,8.8.4.4') }}
port={{ salt['pillar.get']('dnsmasq:port', '0') }}

This configuration file includes default values, such as 10.10.0.100 as the DHCP range start, that are overridden only if a special pillar file is assigned to the node. Change the fallback values in your own dnsmasq.sls file (the eth0 default in the dnsmasq:interface lookup) and in your dnsmasq.conf template to match your normal defaults.
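To see which value a given bonder will actually use, you can query the pillar on the node itself. This is a routine check, not part of the state: an empty result means no pillar override is assigned and the state falls back to its default.

```shell
# Prints the pillar override if one is assigned; empty output means
# the eth0 default in the state will be used:
salt-call pillar.get dnsmasq:interface
```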

Overriding DHCP values for specific nodes

You may have a standard DHCP server configuration used by most nodes, but one or a few bonders that use custom values. Perhaps the interface is eth3 instead of eth0, or the site needs to use different DNS servers. In this case you can override some or all values for these special bonders.

To override the default configuration values, create a pillar configuration file in /etc/bondingadmin/salt-config/pillars/ or a subdirectory. The pillar can override values using a form such as:

/etc/bondingadmin/salt-config/pillars/dhcp/interface-eth3.sls

dnsmasq:
  interface: eth3
  start_addr: 10.10.0.150
  end_addr: 10.10.0.160
  lease_time: 6h
  gateway: 10.10.0.2
  dns_servers: 8.8.8.8,8.8.4.4,208.67.222.123
  port: 0

This pillar overrides the default interface eth0 with eth3, DHCP start address with 10.10.0.150, and so on.

The pillar must be targeted to the nodes that need their configuration overridden. This is done through the pillar top file, such as:

/etc/bondingadmin/salt-config/pillars/top.sls

base:
  'node-2 or node-7':
    - match: compound
    - dhcp.interface-eth3

This targets the pillar to nodes with ID 2 or 7. You can use any of Salt’s targeting mechanisms if you need a more flexible way to match minions.
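After editing the pillar top file, the override can be confirmed from the management server. The commands below are a sketch, assuming node-7 is one of the targeted minions:

```shell
# Push the new pillar data to the minion, then read back one value
# to confirm the override took effect:
salt 'node-7' saltutil.refresh_pillar
salt 'node-7' pillar.get dnsmasq:interface
```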

Testing state on a bonder

You now need to test the state before applying it to the entire environment. To do so, log into the bonder and run the following command:

salt-call state.sls dhcp.dnsmasq

This configures the node according to the dnsmasq.sls state, applying any pillar data assigned to the node.
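If you want to see what the state would change before letting it make any changes, Salt supports a dry run. This is a standard Salt option, shown here as an optional check:

```shell
# Dry run: report what would change without changing anything
salt-call state.sls dhcp.dnsmasq test=True
```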

Verify that DHCP works by plugging a client PC into the appropriate Ethernet interface, either directly or through a switch, and waiting for a DHCP lease.
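The lease can also be requested and inspected manually on the client rather than waiting for the OS to negotiate one. A sketch, assuming a Linux client whose connected interface is eth0:

```shell
# Request a lease on the connected interface and show the address;
# it should fall inside the configured dhcp-range:
sudo dhclient -v eth0
ip addr show eth0
```

On the bonder itself, dnsmasq on Debian-based systems typically records active leases in /var/lib/misc/dnsmasq.leases.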

Targeting minions

Once the state files have been verified, target the nodes that should have DHCP servers in the state top.sls file. There are two main ways to do this:

  1. Identifying individual bonders in the management server state top.sls file. This method is convenient because it can be managed centrally, requiring no changes on the bonders themselves, but it does not scale conveniently to a large number of nodes.
  2. Registering bonders locally to run a DHCP server. This requires writing one file on each targeted bonder, but scales better in a larger environment.

Targeting individual bonders

When targeting individual bonders to run DHCP servers, set up a state top.sls file such as:

/etc/bondingadmin/salt-config/states/top.sls

base:
  'node-13 or node-2 or node-7':
    - match: compound
    - dhcp.dnsmasq

This configuration targets nodes 13, 2, and 7.
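With this top file in place, you can also apply the state to just those nodes from the management server instead of waiting for a full run. The -L flag takes an explicit list of minion IDs:

```shell
# Apply only the DHCP state to the three targeted bonders:
salt -L 'node-13,node-2,node-7' state.apply dhcp.dnsmasq
```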

Registering bonders locally

To register bonders as DHCP servers via a local file, first set up a state top.sls file targeting minions with a has_dhcp grain value of True:

/etc/bondingadmin/salt-config/states/top.sls

base:
  'has_dhcp:True':
    - match: grain
    - dhcp.dnsmasq

This tells minions that they should start a DHCP server if their has_dhcp grain is set to True.

Since no bonders have that custom grain by default, you need to set the grain value on each targeted bonder by writing the grains file on the node.

/etc/salt/grains

has_dhcp: True

This tells the bonder that it should set up the DHCP server next time it applies its states.
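Instead of writing /etc/salt/grains by hand, the same result can be achieved with Salt itself; grains.setval persists the value to the grains file:

```shell
# Set and persist the custom grain on the bonder:
salt-call grains.setval has_dhcp True
```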

Applying configuration to all nodes

To apply the state across the environment, run the following from the management server. All minions evaluate the state top file, but only the targeted nodes receive the DHCP state:

salt '*' state.apply
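As with any broad state run, it is worth previewing the result first. test=True reports what each minion would change without changing it:

```shell
# Dry run across all minions, then the real run:
salt '*' state.apply test=True
salt '*' state.apply
```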