Yocto Installer
The compilation and installation of the Yocto version of SEAPATH are entirely described in the seapath/yocto-bsp GitHub repository.
Debian Installer
To install the cluster, you need to generate an ISO, based on Debian 11, for each host with this repository.
To generate the ISO from the repository, launch the commands below:
cp srv_fai_config/class/SEAPATH.var.defaults srv_fai_config/class/SEAPATH.var
$EDITOR srv_fai_config/class/SEAPATH.var
./build_iso.sh
See the Configuration section below for more details on the configuration file.
Configuration
In the configuration file, you must define these variables:
FAI_ALLOW_UNSIGNED
: Boolean to allow installation of packages from unsigned repositories (0 => true)
UTC
: Boolean to set the system clock to UTC (possible values: yes or no)
TIMEZONE
: Time zone to set
KEYMAP
: Keyboard layout to choose
ROOTPW
: Crypted password for root
STOP_ON_ERROR
: TODO
MAXPACKAGES
: TODO
username
: ID of the user account to be created
USERPW
: Crypted password for the user account to be created
usernameansible
: ID of the ansible account to be created
myrootkey
: TODO
myuserkey
: TODO
ansiblekey
: TODO
apt_cdn
: TODO
REMOTENIC
: Network interface to be set
REMOTEADDR
: IP address, with its mask, to be set on REMOTENIC
REMOTEGW
: IP address of the gateway to be set on REMOTENIC
Note, however, that all hosts will be installed with the same IP address.
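For illustration, a minimal SEAPATH.var could look like the sketch below. Every value is a placeholder chosen for the example, not a default shipped by the repository, and the exact expected formats (for instance whether REMOTEADDR carries the mask in CIDR notation) should be checked against SEAPATH.var.defaults.

# Illustrative values only; adapt to your environment
FAI_ALLOW_UNSIGNED=0          # allow unsigned repositories (0 => true)
UTC=yes
TIMEZONE=Europe/Paris
KEYMAP=us
ROOTPW='$6$...'               # crypted password, e.g. produced by mkpasswd -m sha-512
username=admin
USERPW='$6$...'
REMOTENIC=eth0
REMOTEADDR=192.168.1.10/24    # assumption: address with mask in CIDR notation
REMOTEGW=192.168.1.1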
Disks
The disk is composed of:
- (If the installation is in UEFI) an EFI partition mounted on /boot/efi with a VFAT filesystem (512 MB).
- A boot partition mounted on /boot with an ext4 filesystem (500 MB).
- A main partition with an LVM configuration (30 GB). This partition is divided into 3 parts:
  - A root partition mounted on / with an ext4 filesystem (7 GB).
  - A log partition mounted on /var/log with an ext4 filesystem (1 GB).
  - A swap partition (500 MB).
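A layout of this kind could be expressed in FAI's setup-storage syntax roughly as in the sketch below. This is an illustration only: the device name, sizes and flags are assumptions, and the real configuration lives in the directory described just after.

# Illustrative UEFI disk layout in FAI setup-storage syntax (not the shipped file)
disk_config sda disklabel:gpt bootable:1 fstabkey:uuid
primary /boot/efi 512 vfat rw
primary /boot 500 ext4 rw
primary - 30G - -
disk_config lvm fstabkey:uuid
vg vg0 sda3
vg0-root / 7G ext4 rw
vg0-log /var/log 1G ext4 rw
vg0-swap swap 500 swap sw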
This layout can be changed in the build_debian_iso/srv_fai_config/disk_config/ directory. There are always two versions of each configuration: one for Legacy BIOS and another for UEFI mode (with the suffix "_EFI").
Prerequisite
Once the host is installed, the ansible/playbooks/cluster_setup_prerequisdebian.yaml playbook needs to be launched to finish the installation.
The inventory must define these variables to run the playbook:
admin_user
: Default user with admin privileges
admin_passwd
: Password hash (optional)
admin_ssh_keys
: (optional)
apply_network_config
: Boolean to apply the network configuration
admin_ip_addr
: IP address for SNMP
cpumachinesnort
: Range of allowed CPUs for non-RT machines
cpumachines
: Range of allowed CPUs for machines (RT and non-RT)
cpumachinesrt
: Range of allowed CPUs for RT machines
cpuovs
: Range of allowed CPUs for Open vSwitch
cpusystem
: Range of allowed CPUs for the system
cpuuser
: Range of allowed CPUs for the user
irqmask
: Sets the IRQBALANCE_BANNED_CPUS environment variable; see the irqbalance manual
livemigration_user
:
logstash_server_ip
: IP address for the logstash-seapath alias in /etc/hosts
main_disk
: Main disk device, whose temperature will be monitored
workqueuemask
: The negation of the irqmask (= ~irqmask)
In this part, the playbook defines the scheduling and the prioritization (see the Scheduling and priorization section).
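As a minimal sketch only, assuming a YAML inventory, the variables above could be provided as follows. Every hostname, address, CPU range and mask is a placeholder for a hypothetical 8-CPU hypervisor, not a recommended value, and the mask format is an assumption.

# Hypothetical inventory excerpt (inventory.yml); values are examples only
all:
  hosts:
    hypervisor1:
      ansible_host: 192.168.1.10
  vars:
    admin_user: admin
    apply_network_config: true
    admin_ip_addr: 192.168.1.10
    cpusystem: "0-1"
    cpuuser: "0-1"
    cpuovs: "2-3"
    cpumachines: "4-7"
    cpumachinesrt: "4-5"
    cpumachinesnort: "6-7"
    irqmask: "fc"          # hex mask banning CPUs 2-7 from irqbalance (assumed format)
    workqueuemask: "3"     # ~irqmask on 8 CPUs: CPUs 0-1
    logstash_server_ip: 192.168.1.100
    main_disk: /dev/sda

The playbook can then be run against this inventory with:

ansible-playbook -i inventory.yml ansible/playbooks/cluster_setup_prerequisdebian.yaml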
Playbook's tasks
Hardening
The ansible/playbooks/cluster_setup_hardening_debian.yaml playbook enables system hardening and the ansible/playbooks/cluster_setup_unhardening_debian.yaml playbook disables it.
The hardened elements are:
- the kernel, with the command-line parameters (see the section below), the sysfs and the modules;
- the GRUB;
- the systemd services;
- the addition of bash profiles;
- the SSH server;
- the addition of sudo rules;
- the shadow password suite configuration;
- the secure tty;
- the audit daemon.
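Assuming the same kind of inventory as in the Prerequisite section (the inventory file name is a placeholder), hardening could be enabled or reverted with commands along these lines:

# Enable system hardening
ansible-playbook -i inventory.yml ansible/playbooks/cluster_setup_hardening_debian.yaml
# Revert it
ansible-playbook -i inventory.yml ansible/playbooks/cluster_setup_unhardening_debian.yaml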
Kernel
The project uses a real-time kernel: the Linux kernel with the PREEMPT_RT patch. It therefore needs some parameters such as:
cpufreq.default_governor=performance
: Use the performance governor by default (more details here).
hugepagesz=1G
: Use 1-gigabyte HugeTLB pages (more details here).
intel_pstate=disable
: Disable intel_pstate as the default scaling driver for supported processors (more details here).
isolcpus=nohz,domain,managed_irq
: nohz to disable the tick when a single task runs; domain to isolate from the general SMP balancing and scheduling algorithms; managed_irq to isolate from being targeted by managed interrupts. See the Scheduling and priorization section.
no_debug_object
: Disable object debugging.
nosoftlockup
: Disable the soft-lockup detector (more details here).
processors.max_cstate=1 and intel_idle.max_cstate=1
: Discard all the idle states deeper than idle state 1, for the acpi_idle and intel_idle drivers, respectively (more details here).
rcu_nocbs
: See the Scheduling and priorization section.
rcu_nocb_poll
: Make the kthreads poll for callbacks.
rcutree.kthread_prio=10
: Set the SCHED_FIFO priority of the RCU per-CPU kthreads.
skew_tick=1
: Helps to smooth jitter on systems with latency-sensitive applications running.
tsc=reliable
: Disable clocksource verification at runtime, as well as the stability checks done at bootup.
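As an illustration of how the CPU-related parameters fit together, on a hypothetical 8-CPU host where CPUs 0-1 stay with the system and CPUs 2-7 are reserved for machines (the same placeholder layout as the inventory sketch above), the isolation parameters could look like:

# Placeholder CPU ranges, for illustration only
isolcpus=nohz,domain,managed_irq,2-7
rcu_nocbs=2-7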
When hardening is enabled, the kernel also has these parameters:
init_on_alloc=1
: Fill newly allocated pages and heap objects with zeroes.
init_on_free=1
: Fill freed pages and heap objects with zeroes.
slab_nomerge
: Disable merging of slabs of similar size.
pti=on
: Enable Page Table Isolation of user and kernel address spaces.
slub_debug=ZF
: Enable red zoning (Z) and sanity checks (F) for all slabs (more details here).
randomize_kstack_offset=on
: Enable kernel stack offset randomization.
slab_common.usercopy_fallback=N
:
iommu=pt
: Get the best performance using SR-IOV (TODO).
security=yama
: Enable the yama security module at boot.
mce=0
: Disable the time in us to wait for other CPUs on machine checks.
rng_core.default_quality=500
: Set the value of the entropy for the system.
lsm=apparmor,lockdown,capability,landlock,yama,bpf
: Set the order of LSM initialization.
More details on the kernel parameters can be found here.
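On a Debian host these parameters typically end up on the kernel command line through GRUB. The excerpt below is only a sketch built from a subset of the parameters above; the CPU ranges are placeholders and the real line is produced by the installer and the hardening playbook.

# /etc/default/grub (illustrative excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="cpufreq.default_governor=performance hugepagesz=1G intel_pstate=disable isolcpus=nohz,domain,managed_irq,2-7 rcu_nocbs=2-7 rcu_nocb_poll rcutree.kthread_prio=10 skew_tick=1 nosoftlockup tsc=reliable"

After editing, regenerate the GRUB configuration with update-grub and reboot; the resulting command line can be checked in /proc/cmdline.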
Virtual cluster
On the host, you must set these sysctl settings:
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
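One way to make these settings persistent is a drop-in file under /etc/sysctl.d (the file name below is arbitrary); note that the net.bridge.* keys only exist once the br_netfilter module is loaded.

# /etc/sysctl.d/99-seapath-bridge.conf (arbitrary name)
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0

Apply them without rebooting with sysctl --system.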
You must define 3 network interfaces on each host of your cluster.
- One interface connects to a virtual network in NAT mode.
- Two interfaces connect to two virtual networks with an MTU set to 9000 (this simulates an Ethernet cable between two machines).
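Assuming the virtual networks are managed with libvirt (an assumption; the network names, bridge names and addresses below are placeholders), the NAT network and one of the 9000-MTU networks could be defined roughly as follows:

<!-- nat-net.xml: network in NAT mode (illustrative) -->
<network>
  <name>nat-net</name>
  <forward mode="nat"/>
  <bridge name="virbr-nat"/>
  <ip address="192.168.100.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.100.10" end="192.168.100.100"/>
    </dhcp>
  </ip>
</network>

<!-- cluster1.xml: isolated network with a 9000 MTU (illustrative) -->
<network>
  <name>cluster1</name>
  <bridge name="virbr-cl1"/>
  <mtu size="9000"/>
</network>

Each definition would then be loaded with virsh net-define <file> and started with virsh net-start <name>.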