...

  • Bump SEAPATH Yocto distribution version

    • If not already done, bump python3-setup-ovs, python3-vm-manager and python3-svtrace in recipes-seapath:

      • SRCREV

      • Version in recipe file name for python3-setup-ovs and python3-vm-manager
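
      A minimal sketch of such a bump (the recipe subdirectory, the old version number and the commit id below are placeholders, not the actual values):

        # Rename the recipe so its file name carries the new version, e.g.
        #   python3-setup-ovs_1.0.bb -> python3-setup-ovs_X.Y.Z.bb
        # then point SRCREV at the released commit inside the recipe:
        SRCREV = "<commit id of the released revision>"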

  • Change the version in SEAPATH distro files for SEAPATH Yocto

  • Add vX.Y.Z.xml file in repo-manifest (pointing to vX.Y.Z repositories using commit ID)

  • Use this vX.Y.Z.xml file as default manifest
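
    For illustration, a pinned entry in vX.Y.Z.xml could look like the line below; the project name, path and remote are assumptions based on the repository list on this page, and the commit id is a placeholder. Making it the default manifest can be done by pointing default.xml at its content or by documenting repo init -m vX.Y.Z.xml.

      <!-- illustrative repo-manifest entry, one per repository -->
      <project name="ansible" path="ansible" remote="origin" revision="<commit id of vX.Y.Z>"/>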

  • Tag these repositories with vX.Y.Z (one possible tagging sequence is sketched after the list):

    • ansible

    • build_debian_iso

    • yocto_bsp

    • meta-seapath

    • ansible-role-systemd-networkd

    • vm_manager

    • cockpit-cluster-vm-management

    • cockpit-cluster-dashboard

    • cockpit-update

    • python3-setup-ovs

    • repo-manifest

    • snmp_passpersist

    • cukinia-tests
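
    One possible way to create and publish these tags, assuming an origin remote and plain (unsigned) tags; adapt if the project signs its release tags:

      git tag vX.Y.Z <commit-id>     # repeat in each repository listed above
      git push origin vX.Y.Z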

  • Revert the repo-manifest

  • Publish ansible code on ansible-galaxy with version vX.Y.Z
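
    If the Ansible code is published as a Galaxy collection, the step is roughly the sketch below (the archive name and token are placeholders); if it is imported as a role instead, the Galaxy web import flow is used:

      ansible-galaxy collection build                     # run from the ansible repository
      ansible-galaxy collection publish <namespace>-<collection>-X.Y.Z.tar.gz --token <galaxy-token>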

  • Write a wiki page with all the release notes

  • Write one dedicated release note for the following repositories:

    • ansible

    • build_debian_iso

    • yocto_bsp

...

  • Flasher
    • cqfd -b flasher
  • Hypervisor cluster
    • Release image → cqfd -b host_efi_swu
    • Debug image → cqfd -b host_efi_dbg_swu
    • Minimal image → cqfd -b host_efi_minimal
  • Hypervisor standalone
    • Release image → cqfd -b host_standalone_efi_swu
    • Debug image → cqfd -b host_standalone_efi_dbg
  • Guest
    • Release image → cqfd -b guest_efi
    • Debug image → cqfd -b guest_efi_dbg
  • Observer
    • cqfd -b observer_efi_swu
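
  All the image builds above are run through cqfd inside the project's build container; a typical first-time sequence in a fresh clone of the Yocto build repository looks like this (flavour names are the ones listed above):

    cqfd init                # build the container image once per clone
    cqfd -b host_efi_swu     # then build each flavour needed for the release
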
  • Create a SEAPATH flasher USB key.
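
    One common way to write the flasher image to the key (the image file name and the /dev/sdX device are placeholders; double-check the target device before writing):

      sudo dd if=<flasher-image> of=/dev/sdX bs=4M status=progress conv=fsync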

  • Clone the revision of SEAPATH Ansible that will be released

  • Set up the Ansible inventory (a minimal sketch follows this list) for:

    • Standalone machine

    • Cluster

      • 2 hypervisors

      • 1 observer

      • 1 VM
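
    A minimal illustrative inventory skeleton is shown below; host names and addresses are placeholders, the observers group name is an assumption, and the SEAPATH playbooks need more variables than shown here:

      all:
        children:
          standalone_machine:
            hosts:
              standalone1: { ansible_host: 192.168.1.10 }
          cluster_machines:
            hosts:
              hypervisor1: { ansible_host: 192.168.1.11 }
              hypervisor2: { ansible_host: 192.168.1.12 }
          observers:
            hosts:
              observer1: { ansible_host: 192.168.1.13 }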

  • Flash and validate a standalone hypervisor

    • Flash a machine with the built standalone release image using the flasher

    • Boot the newly flashed machine and check that it is reachable over SSH

    • Ansible configuration → use the revision of SEAPATH Ansible that will be released

    • Set up the Ansible inventory for the standalone test machine

    • Configure the standalone hypervisor: cqfd run ansible-playbook --limit standalone_machine -i <inventory-to-use> --skip-tags "package-install" playbooks/seapath_setup_main.yaml

    • Check that cukinia tests succeed: sudo cukinia

  • Deploy and validate SEAPATH guest VM on a standalone machine

    • Using the SEAPATH guest VM previously built, deploy it on the standalone machine

      • cqfd run ansible-playbook playbooks/deploy_vms_standalone.yaml -i <path-to-inventory>

    • Open a console in the VM (vm-mgr console or via SSH)

    • Check that cukinia tests succeed: sudo cukinia

  • Flash and validate cluster hypervisors and observer

    • Flash a SEAPATH cluster: 2 hypervisors and 1 observer

    • Boot the newly flashed machines and check that they are reachable over SSH

    • Run Ansible setup on the SEAPATH cluster

      • Set up the Ansible inventory for the cluster

      • Configure the cluster: cqfd run ansible-playbook --limit cluster_machines --skip-tags "package-install" -i <inventory-to-use> playbooks/seapath_setup_main.yaml

  • Run Cukinia tests

    • Run the Cukinia tests on a standalone hypervisor

    • Run the Cukinia tests on a hypervisor inside a cluster

    • Run the Cukinia tests on an observer

    • Run the Cukinia tests on a VM

  • Deploy a standalone VM

    • Check that cukinia tests succeed on hypervisors and observer: sudo cukinia

  • Deploy and validate SEAPATH guest VM in a cluster

    • Using the SEAPATH guest VM previously built, deploy it in the cluster

      • cqfd run ansible-playbook playbooks/deploy_vms_cluster.yaml -i <path-to-inventory>

    • Open a console in the VM (vm-mgr console or via SSH)

    • Check that cukinia tests succeed: sudo cukinia

    • Test VM migration

      • sudo crm resource move <vm-name> <destination-node>
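
        A possible sequence around the migration (resource and node names are placeholders); move adds a location constraint, which is usually cleared once the VM has settled on the destination node:

          sudo crm status                                    # see where the VM currently runs
          sudo crm resource move <vm-name> <destination-node>
          sudo crm status                                    # confirm the VM now runs on the destination
          sudo crm resource clear <vm-name>                  # drop the temporary location constraint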

Debian

TODO