Proxmox Cluster (Tucana Cloud) - Basic Network Automation with Systemd - Part V

Systemd can be used to automate the initial configuration of the Tucana cloud hypervisor network.

The configuration follows a simple logic: first, enable VLAN filtering on the hypervisor bridge; then, create a veth pair that is used to manage Proxmox.

1) Configuration Files

Let's create the configuration files that reflect the network needs.

1.1) Bridge Configuration

The bridge configuration file only needs to specify whether VLAN filtering should be enabled.

[
    {
        "br": "vmbr0",
        "vlan_filtering": "1"
    }
]
/root/network/config
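The bridge script itself (hv3_net_bridge.sh, wired up via systemd in section 2) is not shown in this article, so the following is only a hypothetical sketch of what it might do with the values above, using iproute2. The `run` helper prints each command instead of executing it, so the logic can be inspected without root; a real script would execute the commands directly.

```shell
#!/bin/bash
# Hypothetical sketch of hv3_net_bridge.sh (the real script is not shown).
# run() prints each command instead of executing it, so no root is needed
# to dry-run this; use run() { "$@"; } to actually apply the changes.
run() { echo "$@"; }

BR=vmbr0            # "br" value from /root/network/config
VLAN_FILTERING=1    # "vlan_filtering" value from /root/network/config

# Step 1: make the bridge VLAN-aware.
run ip link set "$BR" type bridge vlan_filtering "$VLAN_FILTERING"
# Step 2 (assumed from the unit's "delete VLAN" wording): drop the
# bridge's own default VLAN 1 membership.
run bridge vlan del dev "$BR" vid 1 self
```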

Also, the /etc/network/interfaces file should contain the below configuration for the bridge.

########### BRIDGE 01 ###########
auto vmbr0
iface vmbr0 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0
#################################
/etc/network/interfaces

1.2) Veth Configuration

The veth pair configuration carries more information: it defines the pair names, the VLAN to connect to, and the management IP address and routes.

[
    {
        "pair":["veth0", "veth1"],
        "bridge": "vmbr0",
        "vlan": [{"vid":10, "pvid":"untagged"}],
        "ip":"192.168.10.100/24",
        "route": [
            {"network":"192.168.10.0/24", "is_default":1, "via":"192.168.10.1", "dev":"veth0"}
        ]
    }
]
/root/network/config
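As with the bridge, hv3_net_veth.sh is not shown here, so below is a hypothetical sketch of how the veth entry above could be applied. For brevity the config values are hardcoded rather than parsed, and the `run` helper prints each command instead of executing it.

```shell
#!/bin/bash
# Hypothetical sketch of hv3_net_veth.sh based on the config above (the
# real script is not shown). run() prints instead of executing; use
# run() { "$@"; } to apply for real.
run() { echo "$@"; }

# Values from the veth entry in /root/network/config:
V0=veth0; V1=veth1          # "pair"
BR=vmbr0                    # "bridge"
VID=10                      # "vlan": vid 10, pvid untagged
ADDR=192.168.10.100/24      # "ip"
GW=192.168.10.1             # "route": via

run ip link add "$V0" type veth peer name "$V1"         # create the pair
run ip link set "$V1" master "$BR"                      # plug one end into the bridge
run bridge vlan add dev "$V1" vid "$VID" pvid untagged  # VLAN 10 as untagged PVID
run ip addr add "$ADDR" dev "$V0"                       # management IP on the host end
run ip link set "$V0" up
run ip link set "$V1" up
run ip route add default via "$GW" dev "$V0"            # "is_default": 1
```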

1.3) Proxmox External Interface

The Tucana HV3 external interface is connected to the bridge and acts as the trunk for the virtual machines' traffic.
The high-level diagram below illustrates how the network is connected.

For this reason, HV3 needs a third systemd service to configure this interface.

2) Systemd Service Modules

Hypervisors 1 and 2 only need the bridge and veth services configured. However, as explained above, hypervisor 3 needs a third module to configure its external interface.

To configure systemd, create the files below and enable each one with the respective command.

2.1) Bridge Module

[Unit]
Description=Service to add VLAN filtering and delete VLAN to bridges
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/bin/bash /root/network/config/hv3_net_bridge.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
/etc/systemd/system/tucana-pve-bridge.service

systemctl enable tucana-pve-bridge.service
┬─[root@hv3:/e/s/system]─[05:49:51 PM]
╰─>$ systemctl status tucana-pve-bridge.service
● tucana-pve-bridge.service - Service to add VLAN filtering and delete VLAN to bridges
     Loaded: loaded (/etc/systemd/system/tucana-pve-bridge.service; enabled; vendor preset: enabled)
     Active: active (exited) since Sun 2023-07-16 16:55:08 BST; 55min ago
    Process: 4102 ExecStart=/bin/bash /root/network/config/hv3_net_bridge.sh (code=exited, status=0/SUCCESS)
   Main PID: 4102 (code=exited, status=0/SUCCESS)
        CPU: 120ms

Jul 16 16:55:08 hv3 systemd[1]: Starting Service to add VLAN filtering and delete VLAN to bridges...
Jul 16 16:55:08 hv3 systemd[1]: Finished Service to add VLAN filtering and delete VLAN to bridges.

2.2) Veth Module

[Unit]
Description=Service to create the veth device that Proxmox will listen on.
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/bin/bash /root/network/config/hv3_net_veth.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
/etc/systemd/system/tucana-pve-veth.service

systemctl enable tucana-pve-veth.service

2.3) External Interface Module

[Unit]
Description=Service to configure the HV3 external interface (enp0s31f6) VLANs.
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/bin/bash /root/network/config/hv3_net_vps.sh enp0s31f6
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
/etc/systemd/system/tucana-pve-external-interface.service

systemctl enable tucana-pve-external-interface.service
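The hv3_net_vps.sh script is not shown either; here is a plausible sketch, assuming it attaches the interface passed as the first argument (enp0s31f6 in the unit file) to the bridge and tags it with the VM VLANs. The VLAN IDs in the loop are placeholders, not from the article, which only establishes that VLAN 10 carries management traffic.

```shell
#!/bin/bash
# Hypothetical sketch of hv3_net_vps.sh; the systemd unit passes the
# interface name (enp0s31f6) as $1. run() prints instead of executing.
run() { echo "$@"; }

IFACE=${1:-enp0s31f6}   # defaulted here only so the sketch runs standalone

run ip link set "$IFACE" master vmbr0   # attach the external NIC to the bridge
run ip link set "$IFACE" up
# Tag the trunk with the VM VLANs. These IDs are placeholders; adjust to
# whatever VLANs the virtual machines actually use.
for vid in 10 20 30; do
    run bridge vlan add dev "$IFACE" vid "$vid"
done
```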

3) Conclusion

With the steps above, we have successfully automated our hypervisor management network and VLAN-aware bridge.