Firewall management

In version 6.6 and above, nftables is used to provide firewall and NAT functionality. In earlier versions, iptables is used.

In both cases, it is possible to inject custom rules, but customizations should be performed according to the instructions on this page to avoid conflicts and undefined behaviour.

nftables (6.6 and above)

In 6.6 and above, the nftables policy is managed as a complete ruleset via the bonding-nftables service. Rules can be injected by placing them in specially named files under the /etc/bonding/nftables directory. Rules from these files will be included in the ruleset any time it is loaded or reloaded.

The naming convention for the files is based on the table, the chain, and, in the case of the nat tables, the IP family. For example, any file that starts with filter-input- and ends with .nft, such as filter-input-my-custom-rules.nft, will be loaded into the input chain of the filter table.

Warning

Since the ruleset is loaded as a whole, invalid syntax in any of these files will prevent the entire ruleset from being loaded.

Please ensure that all rules are valid, otherwise the security and/or functionality of the system may be compromised.

For more information on nftables, see the official documentation.

To load custom rules after modification, run the following command on the device:

systemctl reload bonding-nftables
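Because a single bad fragment blocks the entire ruleset, it can be worth syntax-checking a fragment before reloading. One possible approach, sketched below, is to include the fragment in a throwaway ruleset and run nft in check-only mode. The scratch table and chain names are arbitrary, and the fragment file name is the illustrative one from above:

```shell
# Sketch: wrap a rule fragment in a scratch table/chain so that `nft -c`
# (check-only mode) can parse it without touching the live ruleset.
# The fragment file name is illustrative; substitute your own.
cat > /tmp/syntax-check.nft <<'EOF'
table inet _syntax_check {
    chain _input {
        include "/etc/bonding/nftables/filter-input-my-custom-rules.nft"
    }
}
EOF

# Only attempt the check if the nft binary is available
if command -v nft >/dev/null 2>&1; then
    nft -c -f /tmp/syntax-check.nft && echo "syntax OK"
fi
```

If the check fails, nft reports the offending line without modifying the running ruleset.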

Chains

The following chains are defined.

filter input

This is used to allow external hosts to access services on the bonder device itself. Normally all access is blocked except for what is required for the normal operation of the device.

Files with a prefix of filter-input- will be loaded into this chain. The chain will always be present even when the bonding service is stopped.

filter forward

This is used to define policy for traffic as it is forwarded between networks. In particular, the classification rules for QoS are defined here. This chain can be used to insert rules to block forwarded traffic as well.

Normally, all traffic is allowed to route between networks, except between different private WAN spaces and the public routing table.

Files with a prefix of filter-forward- will be loaded into this chain.
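For example, to block a single LAN host from sending outbound SMTP through the device, a rule like the following could be placed in a file for this chain, such as filter-forward-block-smtp.nft. The host address and file name here are illustrative assumptions:

```
# Block forwarded SMTP from one LAN host (address is an example value)
ip saddr 192.168.1.50 tcp dport 25 drop
```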

filter mangle_forward

This is used to adjust packets as they are forwarded between networks. The main use is to enforce MSS clamping on TCP sessions to account for reduced MTU sizes.

Files with a prefix of filter-mangle-forward- will be loaded into this chain.

It is not recommended to add custom rules to this chain unless directed by support.
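For reference, an MSS clamping rule of the kind described above can be written in nftables as follows. This illustrates the technique only and is not necessarily the exact rule the service installs:

```
# Clamp the MSS of new TCP sessions to the path MTU
tcp flags syn tcp option maxseg size set rt mtu
```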

filter mangle_prerouting

This is used to classify packets for special handling as they are forwarded between networks. The main use is to direct TCP sessions into the TCP proxy when it is enabled.

Files with a prefix of filter-mangle-prerouting- will be loaded into this chain.

It is not recommended to add custom rules to this chain unless directed by support.

nat prerouting

This is used primarily to perform destination NAT (DNAT). A common use would be to translate a public address to a private one for a given TCP or UDP port before performing a routing lookup, a process commonly known as “port forwarding”.

There are separate tables defined for IPv4 and IPv6. Files with a prefix of nat-prerouting-ipv4- will be loaded into the IPv4 chain while files with a prefix of nat-prerouting-ipv6- will be loaded into the IPv6 chain.

nat postrouting

This is used primarily to perform source NAT (SNAT) or masquerading. A common use would be to translate a private address to a public one to allow it to access hosts on public networks.

There are separate tables defined for IPv4 and IPv6. Files with a prefix of nat-postrouting-ipv4- will be loaded into the IPv4 chain while files with a prefix of nat-postrouting-ipv6- will be loaded into the IPv6 chain.
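For illustration, a typical masquerading rule of this kind could look like the following, placed in a file such as nat-postrouting-ipv4-lan-masquerade.nft. The subnet, interface name, and file name are assumptions for the example:

```
# Masquerade LAN traffic leaving via the WAN interface (names are examples)
ip saddr 192.168.1.0/24 oifname "wan0" masquerade
```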

Examples

Allowing management access

Say we wanted to add rules allowing SSH and SNMP to a node from a particular set of IPv4 and IPv6 addresses. We can create a file called /etc/bonding/nftables/filter-input-allow-management.nft and put the following rules in it:

ip saddr { 198.51.100.0/24, 203.0.113.28 } tcp dport 22 accept
ip saddr { 198.51.100.0/24, 203.0.113.28 } udp dport 161 accept
ip6 saddr { 2001:db8:40::/64, 2001:db8:10::42 } tcp dport 22 accept
ip6 saddr { 2001:db8:40::/64, 2001:db8:10::42 } udp dport 161 accept

CPE NAT IP port forwarding

CPE NAT IP is used to allow a private IP range behind a bonder to access global routing by translating the private addresses to a single globally-routable address. By default, all traffic will be directed to a single IP address in a connected range, but it is commonly required to direct specific TCP/UDP ports to other addresses in the range.

For example, say we have a private connected IP of 192.168.1.1/24 and there is a CPE NAT IP of 203.0.113.23 with a destination of 192.168.1.1. This is a common setup where the bonder acts as the primary gateway for a LAN.

Also, say the local network has a web server on 192.168.1.2 and a mail server on 192.168.1.3. Both servers need to be available from public networks. We will need to forward public traffic destined to the public IP address on ports 80 and 443 to the web server, and ports 25 and 587 to the mail server.

To accomplish this, we can create a file called /etc/bonding/nftables/nat-prerouting-ipv4-port-forwarding.nft and put the following rules in it:

ip daddr 203.0.113.23 tcp dport { 80, 443 } dnat to 192.168.1.2
ip daddr 203.0.113.23 tcp dport { 25, 587 } dnat to 192.168.1.3

Additionally, say we wanted to translate port numbers so that the same port on multiple private hosts can be reached via the public IP on different ports. A common use is to allow SSH to multiple hosts.

To map port 2022 on the public IP to 22 on the web server and 3022 to the mail server, we can add the following rules:

ip daddr 203.0.113.23 tcp dport 2022 dnat to 192.168.1.2:22
ip daddr 203.0.113.23 tcp dport 3022 dnat to 192.168.1.3:22

iptables (6.5 and below)

Customizing firewall behaviour

To customize firewall behaviour, create a new script in /etc/firewall.d/. When /etc/init.d/firewall is run, your script will be called with the same argument that was passed to /etc/init.d/firewall:

  • start, to add the required behaviour
  • stop, to remove the installed behaviour
  • restart, to remove and add the required behaviour
  • force-reload, to reload the behaviour without restarting services, if possible
  • status, to display a short message describing the current status of the behaviour

These arguments match the arguments to standard Debian init scripts.

Hooks are called with the argument “start” at system boot and “stop” at shutdown.

An example firewall script is available at /usr/share/doc/bonding/examples/firewall.

Note

A firewall script should handle running with the start argument multiple times, with no call to stop, without adding multiple copies of the behaviour. Consider calling the stop function at the beginning of the start function, but ensure the stop function does not fail if the behaviour has not yet been installed (i.e. when the function is called at system boot).
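As a sketch of the pattern described in this note, a minimal hook script might look like the following. The rule parameters, addresses, and interface name are illustrative assumptions, not values from the product:

```shell
#!/bin/sh
# Hypothetical /etc/firewall.d/10_private_nat - an illustrative sketch only.
IPT=${IPT:-iptables}
RULE="-s 192.168.1.0/24 -o eth0 -j MASQUERADE"   # example rule parameters

do_stop() {
    # Delete the rule if present; "|| true" keeps stop idempotent,
    # so it does not fail when the rule was never installed
    $IPT -t nat -D POSTROUTING $RULE 2>/dev/null || true
}

do_start() {
    # Call stop first so repeated starts do not install duplicate rules
    do_stop
    $IPT -t nat -A POSTROUTING $RULE
}

case "$1" in
    start)                do_start ;;
    stop)                 do_stop ;;
    restart|force-reload) do_stop; do_start ;;
    status)
        if $IPT -t nat -C POSTROUTING $RULE 2>/dev/null; then
            echo "private NAT active"
        else
            echo "private NAT inactive"
        fi
        ;;
esac
```

Because do_start begins by calling do_stop, running start repeatedly does not accumulate duplicate copies of the rule.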

Scripts in /etc/firewall.d are called in the order of their names. For example, a script named a would come before b but after 00_z. Consider starting a script's name with a number (e.g. 10_private_nat) to make it obvious when it will run in relation to other scripts in the directory. When stopping the firewall, scripts are run in the reverse order. Script names must use only upper and lower case letters, digits, underscores, and hyphens. Note that the dot character is forbidden.
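This ordering is consistent with plain byte-wise (C locale) sorting of the names, which the following quick check illustrates using the example names above:

```shell
# Print the example script names in the order the firewall would run them
printf '%s\n' b a 10_private_nat 00_z | LC_ALL=C sort
# Output: 00_z, 10_private_nat, a, b (digits sort before letters)
```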

Running the firewall script

The firewall script is executed with the service command. For example,

service firewall restart

The firewall is controlled with the script /etc/init.d/firewall. This script runs each executable file in the directory /etc/firewall.d. The following arguments are supported:

  • start: enables the firewall
  • stop: disables the firewall
  • restart: disables, then enables the firewall
  • force-reload: disables, then enables the firewall
  • status: shows status information about the firewall

The firewall is automatically started when the system boots and stopped when the system shuts down.