Example Salt state – custom firewall for a space¶
The configurations in this document offer an end-to-end example of a variety of Salt topics. They show how Salt can be used to customize the environment-wide firewall script on bonders that are members of a space. The list of bonders in the space must be maintained manually.
Configuring Salt states¶
There is a single firewall configuration file for the whole
SD-WAN environment. This file is downloaded as
/etc/bonding/nftables/filter-input-99-trusted-networks.nft to all bonders, aggregators, and
PWAN-routers, and is stored at
/etc/bondingadmin/salt-config/states/node/filter-input-99-trusted-networks.nft on management
servers. However, the file need not be copied verbatim from the
management server to nodes; it can be templated to include data that
varies per node. We use this capability to issue the standard
filter-input-99-trusted-networks.nft file to most nodes, while adding extra IPs for nodes
in a certain space.
Note
Nodes on version 6.5 and lower use the known_ips configuration file.
To add the capability to include per-node information in the
filter-input-99-trusted-networks.nft file, insert the following line inside the braces ({}) of the ip saddr rule (ensure there are no blank lines within the braces) in the file located at
/etc/bondingadmin/salt-config/states/node/filter-input-99-trusted-networks.nft:
{% for ip in pillar.get('space_known_ips', []) %}{{ip}}, {% endfor %}
This is a Jinja template expression that Salt renders separately for
each node. It fetches a pillar list called space_known_ips, which is
empty for some nodes and contains one or more additional IPs for
others. Each IP in the list is added to the nftables rule that is
written to the file.
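As an illustration of what the loop produces, this shell sketch mimics the Jinja expansion for a node whose space_known_ips pillar holds the two sample addresses used later in this document (the addresses are examples, not real configuration):

```shell
#!/bin/sh
# Mimics the Jinja loop from the state file:
#   {% for ip in pillar.get('space_known_ips', []) %}{{ip}}, {% endfor %}
ips="198.51.100.0/24 203.0.113.0"
entries=""
for ip in $ips; do
    entries="${entries}${ip}, "
done
printf '%s\n' "$entries"
```

For a node without the pillar, the list is empty and the loop emits nothing, so the rendered rule is identical to the standard file.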
Configuring Salt pillars¶
“Pillars” are Salt’s way of assigning specific configuration data to specific minions. For example, sensitive passwords could be provided only to the hosts that Salt knows require access to the passwords, and not to any other hosts.
To make a new list of IP addresses, you need to make a new directory in
/etc/bondingadmin/salt-config/pillars/. You can name the directory
anything you want, but we recommend you name it by the space key of the
space to which you will assign the list. For example, if the space key
is granvi, you might do:
mkdir /etc/bondingadmin/salt-config/pillars/granvi
Then create a file in that directory called known-ips.sls:
/etc/bondingadmin/salt-config/pillars/granvi/known-ips.sls
space_known_ips:
  - 198.51.100.0/24
  - 203.0.113.0
This list specifies the extra IPs to accept for the targeted nodes.
You can specify single IPs, which are interpreted as /32s, or you can
specify the CIDR mask size as in the example above. The file uses YAML
formatting, so add each new IP on its own line, indented two spaces and
preceded by a dash, as in the example. The name of the list
(“space_known_ips” in the example) must match the name of the pillar
retrieved in
/etc/bondingadmin/salt-config/states/node/filter-input-99-trusted-networks.nft.
Finally, you need to assign the appropriate pillar to the appropriate
nodes. This is done in the pillar top file at
/etc/bondingadmin/salt-config/pillars/top.sls, which is common to
the entire environment. Since this list is managed manually, you can mix
and match nodes from different spaces, or omit some nodes from a space
(for example, include only bonders, not aggregators).
/etc/bondingadmin/salt-config/pillars/top.sls
base:
  'L@node-3,node-5,node-10':
    - granvi.known-ips
The “L@” prefix defines a list target: Salt matches the comma-separated minion IDs that follow it. The above example includes nodes 3, 5, and 10 in the list.
Note
The list of minions must be maintained manually. It does not automatically update if a bond or aggregator is added to a space.
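To confirm the assignment, you can query the same pillar with Salt's pillar.get function against the same list target (the minion IDs here are the ones from the example above; adjust them to your environment):

```shell
# Run on the management server; requires a working Salt environment.
# Each targeted minion should report the list from granvi/known-ips.sls.
salt -L 'node-3,node-5,node-10' pillar.get space_known_ips
```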
The node/minion IDs are found as follows:
Bonders¶
The node ID of a bonder is found in the “advanced” section of the Details pane on the bond details page:
[Screenshot: bond details page, with the Details pane showing the “advanced” section]
For example, the bond in the above image is bond 87, but the bonder ID is 146, meaning the minion ID is “node-146”.
Aggregators and PWAN routers¶
The node ID is simply the aggregator ID or PWAN router ID. For example, the minion ID of aggregator 2 is “node-2”.
Testing¶
To verify the configuration, create an SSH session to one of the nodes targeted by the top.sls file. Then update the Salt configuration by running this command on the node:
salt-call state.apply
This command should return without errors. If an error message is shown, review it and address the issue.
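If the state fails, or you want to preview changes before applying them, Salt's built-in test mode reports what would change without modifying the node:

```shell
# Run on the node; shows pending changes without applying them.
salt-call state.apply test=True
```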
Then run this command on the node:
nft list table inet filter
You should see the new IP addresses at the bottom of the list of accepted IPs. If you don’t see the new IPs, review the various files described above to ensure the changes have been made appropriately.
Finally, tell all the nodes to update their configuration by running this command on the management server:
salt '*' state.apply
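If you would rather limit the update to the nodes targeted in top.sls instead of the whole environment, the same list target works here too (minion IDs taken from the earlier example):

```shell
# Run on the management server; applies states only to the listed minions.
salt -L 'node-3,node-5,node-10' state.apply
```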
If you have questions, please contact tech support.