Node configuration management with SaltStack¶
SD-WAN uses the SaltStack configuration management system (also known simply as Salt) to manage certain properties of bonders, aggregators, and private WAN routers. This open-source software can be extended to manage almost any characteristic of a node, such as:
- file contents and permissions
- users, passwords, and groups
- iptables rules
- installed software packages and package repositories
It also allows actions to be taken when the configuration changes, commands to be run remotely on nodes, and system information to be collected and reported from each node.
Note
Salt integration is supported for nodes running SD-WAN 2015.4 or later.
Salt core concepts¶
A thorough description of Salt concepts and usage is beyond the scope of this document; only a basic introduction is provided here. We recommend working through the following tutorials to gain a basic understanding of Salt:
Salt components¶
For the full details, see Salt’s component overview.
Master¶
The Salt master is the central server where configuration information is managed. This service runs on the management server and sends the configuration files to nodes.
This service is called bondingadmin-salt-master, and its configuration is stored in /etc/salt/master and /etc/salt/master.d/.
Minion¶
“Minion” is Salt’s term for a managed host—what we call a node. The minion client runs on each host, downloads its configuration information, and ensures its configuration adheres to the requirements defined on the master.
This service is called salt-minion, and its configuration is stored in /etc/salt/minion and /etc/salt/minion.d/. The minion ID, which uniquely identifies the host to the master, is “node-<ID>”, where <ID> is the node ID of the host.
States¶
A Salt state is the key configuration element. It is a declaration of a property (e.g. that a certain file must have exactly these contents and permissions) that must hold after the state is applied.
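As an illustration (the state file name and contents here are hypothetical, not part of the default SD-WAN environment), a state declaring that a file must exist with specific contents, owner, and permissions might look like:

```yaml
# node/motd.sls -- hypothetical example state
# Declares that /etc/motd must exist with these contents and permissions.
/etc/motd:
  file.managed:
    - contents: |
        This node is managed by SaltStack.
        Local changes to managed files may be overwritten.
    - user: root
    - group: root
    - mode: 644
```

After the state is applied, the minion guarantees the file matches this declaration, recreating or correcting it if necessary.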
Grains¶
A “grain” is a field of information about a minion. For example, the following grains are available for every host:
- bondingversion (version of SD-WAN)
- cpu_model (e.g. Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz)
- cpuarch (e.g. x86_64)
- host (the hostname)
- kernelrelease (e.g. 3.16.0-4-amd64)
- os (always Debian)
- osrelease (e.g. 7.9)
- mem_total (amount of memory in MB)
- saltversion (version of SaltStack)
- type (bonder, aggregator, or private WAN router)
Grains can be used to assign states to certain minions (for example, only configure SNMP on Debian 7+ hosts) or for reporting purposes.
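For instance (assuming the standard salt CLI on the management server), grains can also target ad-hoc commands, not just states:

```shell
salt -G 'type:bonder' test.ping             # ping only bonders
salt -G 'os:Debian' grains.item osrelease   # report each node's Debian release
```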
Pillars¶
A “pillar” is a piece of configuration information applied to one or more minions. Pillar data is sent only to the minions targeted by the pillar, so pillars can be used for sensitive information such as passwords or keys.
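As a sketch (the pillar name and value here are hypothetical), a pillar carrying an SNMP community string might look like:

```yaml
# snmp.sls -- hypothetical pillar file
# Kept in pillar data because it is only sent to targeted minions.
snmp:
  community: s3cr3t-string
```

A state template can then read the value with {{ pillar['snmp']['community'] }} instead of hard-coding the secret in a state file, which every minion can read.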
Top files¶
There are two “top files” in the Salt environment:
- The state top file is where specific states are applied to specific minions. A variety of targeting techniques can be used to define the states for a node—full details are available at https://docs.saltstack.com/en/develop/topics/targeting/index.html.
- The pillar top file is where specific pillars are applied to minions. For example, if a bonder uses a different DHCP configuration than most bonders, it can have a specific DHCP pillar assigned to it that overrides the default configuration.
Keys¶
The Salt network services use an AES encryption scheme with keys generated on the master and minions. The master automatically accepts minion keys whose fingerprints match those reported through the core SD-WAN channel.
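If you need to inspect key state manually, the standard salt-key tool (assuming it is available on the management server) can list and fingerprint minion keys:

```shell
salt-key -L            # list accepted, pending, and rejected minion keys
salt-key -f 'node-1'   # show the fingerprint of a specific minion key
```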
Other features¶
Salt supports many other features, such as “reactors”, “runners”, and “returners”, that are not currently integrated into the SD-WAN environment. See the Salt documentation for details on these features; we’d appreciate hearing from you if you’re interested in using them.
Salt workflow¶
Salt has two main workflows.
State application by minion service¶
The first workflow is managed by the Salt service running on each minion. When the service starts, it connects to the master, downloads all the state information, determines which states apply to it, and executes the states. After executing the states, the minion’s configuration will match the config specified on the master. This requires no interaction from a user.
Command execution and state application by master service¶
The second workflow allows interaction from a user on the management server. The user can cause the master service to broadcast a command to all minions or to a certain set of minions. The command can be a request to execute their states and refresh their configuration, to report grain information, to execute a specific command locally, or to perform a number of other functions.
Default managed files¶
By default, the Salt environment in SD-WAN manages the following files on all nodes:
- /etc/firewall.d/known_ips
- /root/.ssh/authorized_keys
- /etc/apt/sources.list
It also manages this file on bonders only: /etc/resolv.conf
Customizing configuration management¶
Here’s the part you were really interested in—how to customize Salt to manage the configuration of your nodes.
Targeting minions¶
States are applied to minions via the top file at
/etc/bondingadmin/salt-config/states/top.sls.
Now would be a good time to review Salt’s minion targeting documentation.
Everything in the top file must come under the partner key. This tells
Salt that the states are part of the partner environment, which is
merged into the default base environment supplied by SD-WAN.
Within the partner key, normal Salt matches are used to apply states to
selected minions. If you want to check the top data that a minion will
use for a highstate, you can run salt '*' state.show_top on the
management server.
Here’s an example top file:
/etc/bondingadmin/salt-config/states/top.sls
partner:
'*':
- node.snmp
'type:bonder':
- match: grain
- bonder.resolvconf
- bonder.dhcp
'node-1':
- node.cowsay
The '*' match selects the included states for all nodes, while the
'type:bonder' match selects only bonders, because the first element
in its list is 'match: grain', identifying 'type:bonder' as a
grain match. Finally, 'node-1' matches only the host whose ID is
node-1.
Defining states¶
States and resources referenced from states must be stored in the
/etc/bondingadmin/salt-config/states directory. By default this
directory includes a few sub-directories in which you can store your
states. You can create new directories for your custom states.
States are covered thoroughly in the Salt documentation and example states are provided in the pages listed in the table of contents above.
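For example (the package, paths, and state name here are illustrative, not part of the default environment), a custom state stored at /etc/bondingadmin/salt-config/states/node/ntp.sls could install and configure an NTP client, then be targeted from top.sls as node.ntp:

```yaml
# node/ntp.sls -- hypothetical custom state
ntp:
  pkg.installed: []          # ensure the ntp package is installed

/etc/ntp.conf:
  file.managed:
    - source: salt://node/ntp.conf   # resource file stored alongside the state
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: ntp             # manage the file only after the package exists

ntp-service:
  service.running:
    - name: ntp
    - watch:
      - file: /etc/ntp.conf  # restart the service when the config changes
```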
Note
For compatibility with the system’s own states and pillars, do not use the prefix “internal_” in the name of any new state or pillar.
Testing new states¶
When creating or updating a state, we recommend assigning the new state to one node or a small set of nodes. This prevents an error in the state from propagating to many nodes and causing issues at scale. Once the state has been validated, it can be assigned to the rest of the environment.
There are two ways to target a small set of nodes. The first is with a limited top.sls target, such as:
partner:
'node-1':
- node.cowsay
This would apply the node.cowsay state only to the node with ID 1 when the node’s overall states were executed.
The second method is to call a state manually on a node via the command line. The command is:
salt 'node-1' state.sls node.cowsay
This would cause the node with ID 1 to execute the node.cowsay state
once. As long as the node wasn’t permanently targeted in top.sls, it
would not apply node.cowsay when its overall states were executed.
Applying states¶
Once configuration states are created and top.sls targets nodes appropriately, it’s very easy to cause nodes to configure themselves. Simply run this command on the management server command line:
salt '*' state.apply
This tells each minion to apply its overall state immediately. Minions that aren’t connected will time out. A report will be printed showing the actions that every connected node took to apply its state.
If you have a large environment, it might be a good idea to limit the rate at which nodes configure themselves. This will reduce the load on the management server. Salt allows you to limit your command so that minions are controlled in small batches. For example:
salt '*' --batch-size 10% state.apply
This performs state.apply in batches of 10% of the environment each
time. Salt has its own documentation on batch
sizes.
Running commands from a minion¶
You don’t have to run commands only from the master server. You can also
run commands directly from a minion, using the salt-call command.
For example, to verify if the minion is connected to the master, you
could call:
salt-call test.ping
Or to apply all the states targeting the minion in the master top.sls file, you could run:
salt-call state.apply
Security considerations¶
All top.sls and state information is available to all minions, so do not put passwords or other sensitive information in state files. Sensitive data can be stored in a pillar instead.
Further information¶
There are numerous places to get further information about Salt: