Continuum is a deployment and benchmarking framework for the edge-cloud compute continuum. Continuum offers the following features:
- Continuum automates the creation of a cluster of cloud, edge, and endpoint virtual machines to emulate a compute continuum environment.
- Users can freely configure the specifications of the virtual machines and the network connecting them through a single configuration file.
- Continuum automatically installs operating services, resource managers, and applications inside the emulated cluster based on the user's preferences. Supported operating services include MQTT; resource managers include Kubernetes, KubeEdge, and OpenFaaS; and applications include machine learning workloads.
- Continuum can automatically benchmark the resource managers and applications installed in the emulated cluster, and report metrics and logs back to the user.
- Continuum is easily extendable, allowing users to add support for more infrastructure providers, operating services, resource managers, and applications.
Continuum supports the following software:
- Infrastructure: Virtual machine provisioning through QEMU/KVM on local bare-metal devices.
- Operating Services: Continuum can set up an MQTT broker on edge devices for lightweight communication with endpoint users.
- Resource Manager: Continuum can deploy containerized applications via Docker and Containerd using the resource managers Kubernetes and KubeEdge. OpenFaaS is supported for deploying serverless functions.
- Applications and application back-ends: Continuum supports any application that can be deployed on VMs, containers, or serverless functions. As an example, a machine learning application is included.
Continuum has the following architecture:
The execution flow consists of three phases, each having a configuration and execution step. These phases are infrastructure deployment, software installation, and benchmarking. Each phase is optional, i.e., the framework can be used for infrastructure deployment without any pre-installed software if so desired.
- Infrastructure configuration: Libvirt configuration files for QEMU/KVM are created based on the user's preferences.
- Infrastructure execution: The configuration files are executed, creating QEMU/KVM virtual machines connected through network bridges.
- Software configuration: Ansible is configured for software installation based on the configured infrastructure.
- Software execution: Ansible playbooks are executed, installing operating services and resource management software on each machine. This step includes setting up resource management clusters such as Kubernetes.
- Benchmark configuration: The benchmark is configured and prepared based on the user's preferences.
- Benchmark execution: Applications (encapsulated in containers) are executed using resource managers running on the emulated infrastructure (Kubernetes, KubeEdge, etc.). Meanwhile, application- and system-level metrics are captured, processed, and presented to the user.
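The optional three-phase flow described above can be sketched as follows. This is a minimal illustration; the function and phase names are ours, not Continuum's actual API.

```python
# Minimal sketch of Continuum's three-phase execution flow.
# Phase names mirror the text above; the code is illustrative,
# not the framework's real API.

def run(enable_infrastructure=True, enable_software=True, enable_benchmark=True):
    phases = [
        ("infrastructure", enable_infrastructure),  # create VMs via QEMU/KVM
        ("software", enable_software),              # install services via Ansible
        ("benchmark", enable_benchmark),            # run and measure applications
    ]
    executed = []
    for name, enabled in phases:
        if not enabled:
            continue  # each phase is optional
        # every phase has a configuration step followed by an execution step
        executed.append(f"{name}: configure")
        executed.append(f"{name}: execute")
    return executed
```

For example, `run(enable_software=False, enable_benchmark=False)` mirrors using the framework for infrastructure deployment without any pre-installed software.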
When using Continuum for research, please cite the work as follows:
@inproceedings{2023-jansen-continuum,
author = {Matthijs Jansen and
Linus Wagner and
Animesh Trivedi and
Alexandru Iosup},
title = {Continuum: Automate Infrastructure Deployment and Benchmarking in the Compute Continuum},
booktitle = {Proceedings of the First FastContinuum Workshop, in conjunction with ICPE, Coimbra, Portugal, April, 2023},
year = {2023},
doi = {},
url = {https://atlarge-research.com/pdfs/2023-fastcontinuum-continuum.pdf},
}
Other work on the Continuum framework includes:
@inproceedings{2023-jansen-refarch,
author = {Matthijs Jansen and
Auday Al-Dulaimy and
Alessandro Vittorio Papadopoulos and
Animesh Trivedi and
Alexandru Iosup},
title = {The {SPEC-RG} Reference Architecture for the Compute Continuum},
booktitle = {The 23rd IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, CCGRID 2023, Bangalore, India, May 1-4, 2023},
year = {2023},
doi = {},
url = {https://atlarge-research.com/pdfs/2023-ccgrid-refarch.pdf},
}
This work is funded by NWO TOP OffSense (OCENW.KLEIN.209).
Continuum has integrated support for Prometheus and Grafana on top of Kubernetes and OpenFaaS. Continuum automatically installs and configures these software packages when `observability = True` is set in the configuration file; see configuration/template.cfg.
After Continuum has finished, you can use your browser to open the Grafana dashboard at localhost:3000 and Prometheus at localhost:9090.
The Grafana dashboard requires a username and password; both are `admin` by default.
In case you run Continuum on a machine without a graphical user interface, connect to the machine from a device with one, and port-forward ports 3000 and 9090.
For example, to forward port 3000, use `ssh -L 3000:XXX.XXX.XXX.XXX:3000 username@address -i /path/to/ssh_key`, where XXX.XXX.XXX.XXX is the IP of the cloud controller VM printed after Continuum has finished (typically 192.168.100.2), username@address is the username of your account on the server you ran Continuum on plus that server's address, and /path/to/ssh_key is the corresponding SSH key.
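Both dashboards can also be forwarded through a single SSH session. This is a sketch with placeholder username, address, and key path:

```shell
# Forward Grafana (3000) and Prometheus (9090) in one SSH session.
# 192.168.100.2 is the typical cloud controller IP printed by Continuum;
# replace username, address, and the key path with your own values.
ssh -L 3000:192.168.100.2:3000 \
    -L 9090:192.168.100.2:9090 \
    username@address -i /path/to/ssh_key
```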
This demo requires a single machine and a Linux operating system that supports QEMU/KVM and Libvirt. The demo contains three parts:
- Prepare the environment
- Install the framework
- Use the framework
In the first part, we prepare an Ubuntu 20.04 virtual machine using QEMU/KVM. In part two, we install the Continuum framework inside this VM, and finally, we use the framework in part three. If you have access to a machine with Ubuntu 20.04, you can skip part 1, "Prepare the environment", and start with part 2. Continuum has been tested on Ubuntu 20.04; correct functioning on other operating systems cannot be guaranteed.
If you want to use Continuum for research, you should install it directly on your machine rather than inside a virtual machine, as the extra virtualization layer reduces performance. The framework does support execution on multiple physical machines through a network bridge. We leave this multi-machine execution out of this tutorial; consult the documentation for more information.
Software versions tested:
- QEMU 6.1.0
- Libvirt 6.0.0
- Docker 20.10.12
- Python 3.8.10
- Ansible 2.13.2
We prepare a virtual machine with Ubuntu 20.04 in this step. The only requirement for this part is installing QEMU/KVM and Libvirt. You can execute this part on any operating system that supports these software packages; our demo focuses on Ubuntu 20.04.
- Install requirements
- Install QEMU, KVM, and Libvirt:
sudo apt update && sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
- Give permissions to Libvirt and KVM to run virtual machines (use your own username): `sudo adduser [username] libvirt && sudo adduser [username] kvm`. You may need to log in and out to refresh your group memberships.
- Check if the installation was successful: `qemu-system-x86_64 --version`
- Check if libvirt is running: `sudo systemctl status libvirtd`. If not, activate it using `sudo systemctl enable --now libvirtd`
- Download the Ubuntu 20.04 server image:
wget https://releases.ubuntu.com/20.04.3/ubuntu-20.04.3-live-server-amd64.iso
- Create a QCOW disk as storage for your VM:
qemu-img create -f qcow2 ubuntu.img 20G
- At least 20 GB of disk space is required for this tutorial
- Boot the VM
- On a system with a GUI:
sudo qemu-system-x86_64 -hda ubuntu.img --enable-kvm -m 8G -smp 4 -boot d -cdrom ubuntu-20.04.3-live-server-amd64.iso -cpu host -net nic -net user
- This should automatically open up a new window for the VM.
- Memory requirements: At least 4 GB (in this example -m 8G = 8 GB)
- CPU requirements: At least 4 (in this example -smp 4 = 4 CPUs)
- On a system without a GUI:
sudo qemu-system-x86_64 -hda ubuntu.img --enable-kvm -m 8G -smp 4 -boot d -cdrom ubuntu-20.04.3-live-server-amd64.iso -cpu host -net nic -net user,hostfwd=tcp::7777-:22
- Open up a new SSH session into the GUI-less machine using `ssh -X`. The machine that you are SSH'ing from should have a GUI.
- Install Remmina on the GUI-less machine: `sudo apt install remmina`, and run Remmina: `remmina`
- This should open the Remmina screen for you. Click on the + icon to create a new connection. Under protocol, select "VNC", and then under server, add the VNC address displayed in the terminal where you started the VM (for example, 127.0.0.1:5900). Click save and connect to connect to the VM.
- Initialize the VM: Do not forget to install the OpenSSH client during the installation! Remember the username and password you create for later. You can ignore all (security) updates for this demo.
- Shut the VM down once the initial setup is done, and launch again:
sudo qemu-system-x86_64 -hda ubuntu.img --enable-kvm -m 8G -smp 4 -cpu host -net nic -net user,hostfwd=tcp::8888-:22 --name ubuntu
- On a system with a GUI: A new screen should automatically open, and after some time the VM will be done booting up. If you don't want to use a GUI for the VM, open up a new terminal and use `ssh [username]@localhost -p 8888`
- On a system without a GUI: Open up a new terminal and use `ssh [username]@localhost -p 8888`
We now install all requirements for the Continuum framework. We assume the operating system is Ubuntu 20.04, either natively or inside a VM.
- Install VM requirements
  - Install QEMU, KVM, and Libvirt: `sudo apt update && sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils`
  - Give permissions to Libvirt and KVM to run virtual machines (use your own username): `sudo adduser [username] libvirt` and `sudo adduser [username] kvm`. You may need to log in and out to refresh your group memberships.
  - Check if the installation was successful: `qemu-system-x86_64 --version`
  - Check if libvirt is running: `sudo systemctl status libvirtd`. If not, activate it using `sudo systemctl enable --now libvirtd`
- Install Docker (see the Docker website for alternative instructions):

```shell
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo groupadd docker
sudo usermod -aG docker $USER
sudo systemctl enable docker.service
sudo systemctl enable containerd.service

# Now refresh your SSH session by logging in / out

# Support HTTPS
hostname -I # copy the first IP in the list, paste it in the next command under IP_HERE
echo '{ "insecure-registries":["IP_HERE:5000"] }' | sudo tee -a /etc/docker/daemon.json
sudo systemctl restart docker
```
- Get pip: `sudo apt install python3-pip`
- Get Ansible: `sudo apt install ansible`
  - Check if Ansible works: `ansible --version`
  - Edit the Ansible configuration: `sudo vim /etc/ansible/ansible.cfg`
    - Under `[ssh_connection]`, add `retries = 5`
    - Under `[defaults]`, add `callback_whitelist = profile_tasks`
    - Under `[defaults]`, add `command_warnings=False`
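After these edits, the relevant parts of /etc/ansible/ansible.cfg should look like the fragment below (other existing settings in the file can stay as they are):

```ini
[defaults]
callback_whitelist = profile_tasks
command_warnings=False

[ssh_connection]
retries = 5
```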
- Install the Continuum repository: `git clone https://github.com/atlarge-research/continuum.git`
  - Get the Python requirements: `cd continuum && pip3 install -r requirements.txt`
  - Create an .ssh directory: `mkdir ~/.ssh`
  - Create a known hosts file: `touch ~/.ssh/known_hosts`
- Delete the virtual bridge: `virsh net-destroy default` and `virsh net-undefine default`
  - Check that virbr0 no longer exists: `virsh net-list --all`
- Create a network bridge
  - Make a backup of the current network configuration: `sudo cp /etc/netplan/00-installer-config.yaml /etc/netplan/00-installer-config.yaml.bak`
  - Edit the network configuration to create a bridge (`sudo vim /etc/netplan/00-installer-config.yaml`). Use `ip a` to get your machine's network interface (e.g., ens3, enp0s3) and IP (for this example, the IP listed under ens3), and `ip r` to get the gateway address (the first IP on the first line). An example file could look like this:

```yaml
network:
  ethernets:
    ens3:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces: [ens3]
      addresses: [10.0.2.15/16]
      gateway4: 10.0.2.2
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
        search: []
      parameters:
        stp: true
      dhcp4: false
      dhcp6: false
  version: 2
```
- Enforce this new network policy with `sudo netplan generate` and `sudo netplan apply`
- Use `brctl show` to check that bridge br0 now exists, and `ip a` to check that ens3 no longer has a listed IP, but br0 does instead.
- If the IP listed under "addresses" does not start with 192.168, one change in the framework is required: Edit continuum.py (`vim continuum.py`), search for the "add_constants" function, and change config["prefixIP"] to your prefix (e.g., "10.0" for this example).
- Enable IP forwarding from VMs to the bridge:

```shell
# This is one command.
# If permission denied, execute "sudo su" first.
cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
EOF
# If sudo su was used, do "exit" now.
# Then execute this command:
sudo sysctl -p /etc/sysctl.conf
```
- Inside the Continuum framework:
  - Check the input parameters of the framework: `python3 continuum.py -h`
  - The configuration files are stored in /configuration. Check /configuration/template.cfg for the template that these configuration files follow.
  - Run one of these configurations, such as a simple edge computing benchmark: `python3 continuum.py -v configuration/bench_edge.cfg`
  - If the program executes correctly, the results will be printed at the end, as well as the SSH commands needed to log into the created VMs.
In this part, you will set up OpenFaaS, a serverless framework, in the Kubernetes cluster that Continuum created for you.
For the moment, OpenFaaS can only be installed outside of the framework; in the future, we will integrate it into the framework.
- Run Continuum with a configuration for OpenFaaS. The `resource_manager_only = true` flag and `model = openfaas` in section `execution_model` are critical here: `python3 continuum.py configuration/bench_openfaas.cfg`
- From your host system, SSH onto the `cloud_controller` node, for example: `ssh [email protected] -i ~/.ssh/id_rsa_continuum`
- On the `cloud_controller`, make port 8080 from the Kubernetes cluster available on the node: `nohup kubectl port-forward -n openfaas svc/gateway 8080:8080 &`. After execution, hit Ctrl+C to exit the dialog.
- Give the `faas-cli` access to the OpenFaaS deployment:

```shell
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
```
Congratulations! As long as you don't reset the cluster, you can now access the OpenFaaS deployment through the `cloud_controller` node and `faas-cli`.
You can test your installation by deploying and running a simple function, figlet. Figlet echoes its input back to the user as an ASCII banner.
For now, we will use the command line to deploy the function. For a real-world scenario, this might not be desirable, and you should use a YAML file to do your deployments, like Johnny does in his tutorial. Why is that?
Deploy figlet to OpenFaaS: `faas-cli store deploy figlet`
If everything went well, you should now see it in the list of functions: `faas-cli list`
Now it's time to execute your first serverless function: `curl http://localhost:8080/function/figlet -d 'Hello world!'`
Please read the documentation in /docs when encountering issues during the installation or usage of the framework.