...

  • Go to the project settings

  • In the left menu, select "Rulesets" under "Rules"

  • Click the "New ruleset" button, then "Import a ruleset"

  • Select the "branch-default-ruleset.json" file

  • On the new screen, scroll to the bottom and click the "Create" button
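
If the ruleset has to be applied to many repositories, the same import can be scripted through the GitHub REST API instead of the web UI. The sketch below is only an assumption based on the generic "create a repository ruleset" endpoint: <org>/<repo> are placeholders, and the exported JSON may need fields such as "id" or "source" stripped before the API accepts it.

# Hypothetical CLI alternative to the web UI import (<org>/<repo> are placeholders)
gh api repos/<org>/<repo>/rulesets --method POST --input branch-default-ruleset.json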

Release procedure

For a new vX.Y.Z release:

  • Bump SEAPATH Yocto distribution version

    • If not already done, bump python3-setup-ovs, python3-vm-manager and python3-svtrace in recipes-seapath:

      • SRCREV

      • Version in recipe file name for python3-setup-ovs and python3-vm-manager

  • Change the version in SEAPATH distro files for SEAPATH Yocto

  • Add a vX.Y.Z.xml file in repo-manifest (pointing to the vX.Y.Z repositories using commit IDs)

  • Use this vX.Y.Z.xml file as the default manifest

  • Tag these repositories with vX.Y.Z (a sketch of the tagging loop follows this list):

    • ansible

    • build_debian_iso

    • yocto_bsp

    • meta-seapath

    • ansible-role-systemd-networkd

    • vm_manager

    • cockpit-cluster-vm-management

    • cockpit-cluster-dashboard

    • cockpit-*update

    • python3-setup-ovs

    • repo-manifest

    • snmp_passpersist

    • cukinia-tests

  • Revert the default manifest change in repo-manifest

  • Publish the Ansible code on Ansible Galaxy with version vX.Y.Z

  • Write a wiki page with all the release notes

  • Write one dedicated release note for each of the following repositories:

    • ansible

    • build_debian_iso

    • yocto_bsp
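
A possible way to tag everything consistently, assuming all the repositories listed above are cloned side by side in the current directory and that the remote is named origin (both are assumptions, not part of the procedure):

# Tag every release repository with the new version and push the tag
VERSION=vX.Y.Z
for repo in ansible build_debian_iso yocto_bsp meta-seapath \
            ansible-role-systemd-networkd vm_manager \
            cockpit-cluster-vm-management cockpit-cluster-dashboard \
            cockpit-*update python3-setup-ovs repo-manifest \
            snmp_passpersist cukinia-tests; do   # cockpit-*update expands to the matching clones
    git -C "$repo" tag "$VERSION"
    git -C "$repo" push origin "$VERSION"
done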

...

  • Flasher
    • cqfd -b flasher
  • Hypervisor cluster
    • Release image → cqfd -b host_efi_swu
    • Debug image → cqfd -b host_efi_dbg_swu
    • Minimal image → cqfd -b host_efi_minimal
  • Hypervisor standalone
    • Release image → cqfd -b host_standalone_efi_swu
    • Debug image → cqfd -b host_standalone_efi_dbg
  • Guest
    • Release image → cqfd -b guest_efi
    • Debug image → cqfd -b guest_efi_dbg
  • Observer
    • cqfd -b observer_efi_swu
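
For reference, a minimal sketch of building one of these images from a fresh checkout, assuming the usual repo + cqfd workflow; the manifest URL and the yocto-bsp directory name are assumptions, not taken from this page:

# Fetch the sources described by the release manifest (URL assumed)
repo init -u https://github.com/seapath/repo-manifest
repo sync
# Build the cqfd container, then one of the image flavours listed above
cd yocto-bsp   # directory name assumed
cqfd init
cqfd -b host_efi_swu
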
  • Create a SEAPATH flasher USB key.
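
A possible way to write the key, assuming the flasher build produces a raw disk image; the image path and the /dev/sdX device are placeholders that must be replaced (dd erases the target device):

# Write the flasher image to the USB key (placeholders: image path and device)
sudo dd if=<path-to-flasher-image> of=/dev/sdX bs=4M status=progress conv=fsync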

...

  • Clone the SEAPATH Ansible revision that will be released

  • Set up the Ansible inventory (a minimal sketch follows this list) for:

    • Standalone machine

    • Cluster

      • 2 hypervisors

      • 1 observer

      • 1 VM

  • Flash and validate a standalone hypervisor
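
A minimal sketch of what such an inventory could look like, written as a shell here-document; only the standalone_machine and cluster_machines group names come from the commands on this page, all other group names, host names and addresses are assumptions, and the real SEAPATH inventory variables (network, storage, etc.) still have to be added:

# Hypothetical skeleton inventory, to be passed to ansible-playbook with -i
cat > seapath_release_inventory.yaml <<'EOF'
all:
  children:
    standalone_machine:        # group targeted by --limit standalone_machine
      hosts:
        standalone1:
          ansible_host: 192.168.1.10
    cluster_machines:          # group targeted by --limit cluster_machines
      children:
        hypervisors:           # assumed group name
          hosts:
            hypervisor1:
              ansible_host: 192.168.1.11
            hypervisor2:
              ansible_host: 192.168.1.12
        observers:             # assumed group name
          hosts:
            observer1:
              ansible_host: 192.168.1.13
EOF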

...

    • Flash a machine with the built standalone release image using the flasher

...

    • Boot the newly flashed machine, check that it is reachable from SSH

...

    • Configure the standalone hypervisor: cqfd run ansible-playbook --limit standalone_machine -i <inventory-to-use> playbooks/seapath_setup_main.yaml

...

    • Check that cukinia tests succeed: sudo cukinia

  • Deploy and validate SEAPATH guest VM on a standalone machine

    • Using the SEAPATH guest VM previously built, deploy it in the cluster

      • cqfd run ansible-playbook playbooks/deploy_vms_standalone.yaml -i <path-to-inventory>

    • Open a console in the VM (vm-mgr console or via SSH)

    • Check that cukinia tests succeed: sudo cukinia

  • Flash and validate cluster hypervisors and observer

    • Flash a SEAPATH cluster: 2 hypervisors and 1 observer

...

    • Boot the newly flashed machine, check that it is reachable from SSH

    • Run Ansible setup on the SEAPATH cluster

...

      • Set up the Ansible inventory for the cluster

...

      • Configure the cluster: cqfd run ansible-playbook --limit cluster_machines -i <inventory-to-use> playbooks/seapath_setup_main.yaml
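
Once the playbook has finished, a quick sanity check of the cluster state can be done from any hypervisor; the sketch below only relies on the standard Pacemaker and Ceph tools, not on a SEAPATH-specific command:

# Pacemaker view: all nodes should be online
sudo crm status
# Ceph view (if the cluster uses Ceph storage): health should be HEALTH_OK
sudo ceph -s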

...

  • Run the Cukinia tests on a standalone hypervisor
  • Run the Cukinia tests on a hypervisor inside a cluster
  • Run the Cukinia tests on an observer
  • Run the Cukinia tests on a VM
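
For the release validation it can be useful to keep the test results as files; a hedged sketch, assuming the machine-readable output options are available in the installed cukinia version:

# Run the test suite and keep a JUnit XML report alongside the release notes
sudo cukinia -f junitxml -o /tmp/cukinia_report.xml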

...

    • Check that cukinia tests succeed on hypervisors and observer: sudo cukinia

  • Deploy and validate SEAPATH guest VM in a cluster

    • Using the SEAPATH guest VM previously built, deploy it in the cluster

      • cqfd run ansible-playbook playbooks/deploy_vms_cluster.yaml -i <path-to-inventory>

    • Open a console in the VM (vm-mgr console or via SSH)

    • Check that cukinia tests succeed: sudo cukinia

    • Test VM migration

      • sudo crm resource move <vm-name> <destination-node>
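
After the move, it is worth checking where the VM ended up and removing the location constraint created by the move command; a short sketch using standard crmsh commands, with the same placeholders as above:

# Check that the VM is now running on the destination node
sudo crm status
# Remove the constraint added by "crm resource move" so the cluster can rebalance
sudo crm resource clear <vm-name>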

Debian

TODO