...

Info

To avoid PTP frames interfering with IEC 61850 traffic, the best practice is to isolate PTP frames on a dedicated VLAN. This is done with the variable `ptp_vlanid` in the Ansible inventories.
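As an illustrative sketch, the variable could be set per hypervisor in a YAML inventory. Only the `ptp_vlanid` variable name comes from the text above; the host name and VLAN ID 100 are hypothetical placeholders.

Code Block
languageyaml
# Hypothetical inventory fragment: host name and VLAN ID are placeholders
hypervisors:
  hosts:
    hypervisor1:
      ptp_vlanid: 100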

Reconfiguring the network entirely with netplan

It is possible to redefine the SEAPATH network entirely using netplan.io YAML files. This behavior is enabled by the variable `netplan_configurations` in the inventory. Examples are provided in the Ansible repository.
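As a hypothetical sketch, the variable could list the netplan files to deploy. Only the `netplan_configurations` variable name comes from the text above; the file paths are placeholders, not actual paths from the repository.

Code Block
languageyaml
# Hypothetical example: the file paths below are placeholders
netplan_configurations:
  - netplan/admin-network.yaml
  - netplan/cluster-network.yaml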

...

Info

For now, using netplan disables all of SEAPATH's built-in network configuration. Everything, including the administration and cluster networks, has to be handled by netplan YAML files.

The long-term goal is to be able to define the network entirely, either with systemd-networkd directly or with netplan, but this is not yet implemented.

OpenVSwitch management

Creating an OVS bridge with Netplan

SEAPATH supports the creation of an OVS bridge directly from a Netplan configuration file. To do so, you can use the Netplan configuration example file located in the SEAPATH Ansible inventories directory.

In the following example, we create an OVS bridge connected to the physical interface pointed to by the ovs_ext_interface inventory variable, defined in the hypervisor inventory. All network traffic arriving on this interface will be redirected to this bridge. This configuration is useful when you want to share the traffic arriving on one interface with multiple virtual ports, and therefore with multiple virtual machines.

Code Block
languageyaml
# Copyright (C) 2024 Savoir-faire Linux, Inc.
# SPDX-License-Identifier: Apache-2.0
network:
  version: 2
  renderer: networkd
  openvswitch:
    protocols: [OpenFlow13, OpenFlow14, OpenFlow15]
  ethernets:
    {{ ovs_ext_interface }}: {}
  bridges:
    ovs0:
      interfaces: [{{ ovs_ext_interface }}]
      openvswitch:
        protocols: [OpenFlow10, OpenFlow11, OpenFlow12]

After applying the Netplan configuration with the `netplan apply` command, you can identify the created OVS bridge (here ovs0) with the `ip a` command:

Code Block
languagebash
root@minisforum:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 58:47:ca:72:49:50 brd ff:ff:ff:ff:ff:ff
3: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 58:47:ca:72:49:51 brd ff:ff:ff:ff:ff:ff
4: enx782d7e14367c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 78:2d:7e:14:36:7c brd ff:ff:ff:ff:ff:ff
7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:f7:2f:2c:1b:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.216.74/24 brd 192.168.216.255 scope global br0
       valid_lft forever preferred_lft forever
8: wlp4s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 38:d5:7a:25:d4:2d brd ff:ff:ff:ff:ff:ff
9: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a2:71:7b:7e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
10: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 1e:86:fb:0e:ff:7c brd ff:ff:ff:ff:ff:ff
11: ovs0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 58:47:ca:72:49:51 brd ff:ff:ff:ff:ff:ff

You can also confirm the creation of the OVS bridge with the `ovs-vsctl show` command:

Code Block
languagebash
root@minisforum:~# ovs-vsctl show
db96c66c-2cfb-40d0-bbcb-10038eb1e6fb
    Bridge ovs0
        fail_mode: standalone
        Port enp3s0
            Interface enp3s0
        Port ovs0
            Interface ovs0
                type: internal
    ovs_version: "3.1.0"


Info

enp3s0 is the interface pointed to by ovs_ext_interface.

Creating an OVS virtual port with Libvirt

SEAPATH also supports the creation of an OVS virtual port directly from the VM Libvirt XML configuration file. To do so, you can reuse the templated VM file guest.xml.j2, which supports this feature.

Code Block
languagexml
{% if vm.bridges is defined %}
{% for bridge in vm.bridges %}
    <interface type="bridge">
      <source bridge="{{ bridge.name }}"/>
      <mac address="{{ bridge.mac_address }}"/>
      <model type="virtio"/>
{% if bridge.type is defined %}
      <virtualport type='{{ bridge.type }}'/>
{% endif %}
{% if bridge.vlan is defined %}
      <vlan>
        <tag id='{{ bridge.vlan.vlan_tag }}'/>
      </vlan>
{% endif %}
    </interface>
{% endfor %}
{% endif %}

Then, in your VM inventory, you have to add the OVS bridge to a bridges field and specify its name, its MAC address, and finally set the bridge type to openvswitch. This type tells Libvirt that the selected bridge is an OpenVSwitch bridge, so Libvirt will add a virtual port connected to it. Based on the previous example, all network traffic arriving on the interface pointed to by ovs_ext_interface will be redirected to this new virtual port.

Code Block
languageyaml
bridges:
  - name: "ovs0"
    mac_address: "58:47:ca:72:49:51"
    type: openvswitch

After the VM has been created and started, this new virtual port will appear on the hypervisor side (here vnet0) with the `ip a` command:

Code Block
languagebash
root@minisforum:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 58:47:ca:72:49:50 brd ff:ff:ff:ff:ff:ff
3: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 58:47:ca:72:49:51 brd ff:ff:ff:ff:ff:ff
4: enx782d7e14367c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 78:2d:7e:14:36:7c brd ff:ff:ff:ff:ff:ff
7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:f7:2f:2c:1b:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.216.74/24 brd 192.168.216.255 scope global br0
       valid_lft forever preferred_lft forever
8: wlp4s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 38:d5:7a:25:d4:2d brd ff:ff:ff:ff:ff:ff
9: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a2:71:7b:7e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
10: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 1e:86:fb:0e:ff:7c brd ff:ff:ff:ff:ff:ff
11: ovs0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 58:47:ca:72:49:51 brd ff:ff:ff:ff:ff:ff
12: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:47:ca:72:49:51 brd ff:ff:ff:ff:ff:ff

You may need to isolate the network traffic for a specific virtual port, and therefore create a specific VLAN for it. To do so, you can add a vlan entry to the bridge definition in the VM inventory, as follows:

Code Block
languageyaml
bridges:
  - name: "ovs0" # Change the bridge name
    mac_address: "58:47:ca:72:49:51"
    type: openvswitch
    vlan:
      vlan_tag: 100

In this example, the vnet0 OVS virtual port will be isolated on VLAN 100.
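As an illustrative check (the output below is a hypothetical, abbreviated transcript consistent with the setup above, not a captured session), the VLAN tag should appear on the port in the `ovs-vsctl show` output:

Code Block
languagebash
root@minisforum:~# ovs-vsctl show
...
    Bridge ovs0
        Port vnet0
            tag: 100
            Interface vnet0
...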