
To deploy your application on SEAPATH within a Virtual Machine (VM), you have two options:

  1. Use a generic SEAPATH VM

  2. Use your own VM

SEAPATH uses QEMU and Libvirt to deploy virtual machines. With this technology, two elements are needed:

  • A libvirt XML configuration file: this file describes all the devices needed by the VM, such as memory, CPU configuration, and network interfaces.

  • A VM disk image: this file represents the disk of the virtual machine and contains the kernel as well as the filesystem. The common format for Linux-based VMs is qcow2.
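
For illustration, here is a minimal libvirt XML sketch of such a configuration file (the domain name, sizes, and paths below are purely illustrative):

<domain type='kvm'>
    <name>my-vm</name>
    <memory unit='GiB'>2</memory>
    <vcpu>2</vcpu>
    <os>
        <type arch='x86_64'>hvm</type>
    </os>
    <devices>
        <!-- The VM disk: a qcow2 image stored on the hypervisor -->
        <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/var/lib/libvirt/images/my-vm.qcow2'/>
            <target dev='vda' bus='virtio'/>
        </disk>
        <!-- A network interface attached to a host bridge -->
        <interface type='bridge'>
            <source bridge='br0'/>
            <model type='virtio'/>
        </interface>
    </devices>
</domain>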

SEAPATH uses Ansible and vm_manager to deploy VMs on the SEAPATH hypervisor:

  • A deployment Ansible playbook is available there

    • More documentation can be found there

  • In Ansible inventories you can specify:

    • The libvirt XML file used

      • Several libvirt XML templates are provided with SEAPATH. They allow you to configure your VM with different options, such as real-time or non-real-time capabilities.

      • Detailed documentation can be found there.

    • The disk image (qcow2) used, with two options:

      • Based on the generic SEAPATH VM

      • Using your own disk image (e.g. Windows, Ubuntu, etc.)

Use a generic SEAPATH VM

If you need a virtual machine for testing or for deploying your application, you can use the generic SEAPATH VM, which includes:

  • All applicable cybersecurity policies and tests, as applied on the SEAPATH hypervisor

  • A minimal set of installed applications

Detailed information on how to build the disk image is available here for a Yocto VM, or here for a Debian VM.

Use your own VM

The disk image format supported on SEAPATH is qcow2, the format used by QEMU.

Other disk image formats can be converted to qcow2 with qemu-img; refer to the QEMU documentation.
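
For example, to convert a VMDK image to qcow2 (the file names are illustrative):

qemu-img convert -f vmdk -O qcow2 my-vm.vmdk my-vm.qcow2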

Virtual machine and slices

If you configured SEAPATH with the machine-rt and machine-nort slices (see Scheduling and priorities), you must place the VM into one of them. This is done using the libvirt resource element. The VM will then only have access to the CPUs associated with that slice.

Possible values:

  • /machine/nort

  • /machine/rt

For example, for a virtual machine in the real-time slice:

<resource>
    <partition>/machine/rt</partition>
</resource>

CPU tuning

In the project, the cputune element is used to constrain the virtual machine's CPU usage (more details here).

  • The vcpupin element specifies which of the host's physical CPUs a domain vCPU will be pinned to. Use this element only for performance-driven VMs.

  • The vcpusched element specifies the scheduler type for a particular vCPU. Use this element only for performance-driven VMs.

  • The emulatorpin element specifies which of the host's physical CPUs the emulator will be pinned to. The emulator covers the management processes of the VM (watchdog, creation, timers, modification). Most of the time, this element is not needed.

If you configured CPU restrictions for slices (see Scheduling and priorities), all the CPU ranges that you provide for emulatorpin and vcpupin must be part of the allowed CPUs of the slice hosting the VM. By default, the VM is part of the machine slice, but it can be in the machine-rt or machine-nort slice depending on your configuration.

If you deploy a machine with a real-time vCPU scheduler, you must set the emulatorpin CPU range. It can be set to the system CPU range or to specific cores.
Remember that, for maximal performance, each vCPU must be scheduled alone on its core; emulatorpin must then be set to another core, as in the sketch below.
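
As an illustration, here is a minimal cputune sketch for a two-vCPU real-time VM (the core numbers are purely illustrative and must match the allowed CPUs of the slice hosting the VM):

<vcpu placement='static'>2</vcpu>
<cputune>
    <!-- Pin each vCPU alone on its own core -->
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <!-- Pin the emulator threads on different cores than the vCPUs -->
    <emulatorpin cpuset='0-1'/>
    <!-- Give both vCPUs the real-time FIFO scheduling policy -->
    <vcpusched vcpus='0-1' scheduler='fifo' priority='1'/>
</cputune>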
