Extending OpenStack Shaker for VNF performance testing (Blog — 1)

Sirishagopigiri · Apr 15, 2021

As we advance towards virtualization through Virtual Network Functions (VNFs) and Containerized Network Functions (CNFs) in the telco-cloud world, it is important that the performance of these Network Functions (NFs) matches that of Physical Network Functions (PNFs). To make this possible, we need to monitor the performance of these network functions when they are launched on virtualized platforms like OpenStack. One of the most important parameters to monitor is the network performance of VNFs launched in a Virtualized Infrastructure Manager (VIM) such as OpenStack.

Shaker is an open-source project from the OpenStack foundation. It can be used to measure the throughput and response times of OpenStack VMs, which in turn reflect the performance of the tenant networks in an OpenStack setup. As quoted here, Shaker is

The distributed data-plane testing tool built for OpenStack.

The OpenStack Shaker project, built to test tenant network performance, can thus be extended to test the interface performance of Virtual Machines (VMs) or Virtual Network Functions (VNFs). To do so, we change the network routes in the minion and primary VMs launched by Shaker so that the traffic is routed through the VNF while the performance test runs. Compared to the standard Shaker architecture, we are simply replacing the OpenStack built-in router with an external VM or VNF. In this way Shaker can be extended to test the performance of VNFs. For now we replace the router with a single VNF, but this can be extended to multiple VNFs as well, as long as the traffic flows between the primary and minion VMs launched by Shaker.
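
The core of the trick is a routing change on each Shaker VM: drop the default route that points at the OpenStack router and add a static route towards the peer subnet via the VNF's interface. A minimal sketch, where the interface name and addresses are illustrative (the exact commands appear in the HOT template later in this post); on the VM in the east network (30.0.0.0/24), the west network becomes reachable via the VNF:

sudo ip route del default via 30.0.0.1
sudo ip route add 40.0.0.0/24 via 30.0.0.10 dev ens4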

In this blog series we discuss how to extend Shaker to test the interface performance of a router VNF in an OpenStack setup. We will start with a simple Ubuntu VM with forwarding enabled, which serves as a basic VNF; next we replace it with a VyOS VM acting as a router; finally we move the VyOS VM across hypervisors to observe how the traffic throughput, response times and latency are affected. So this will be a three-part blog series covering the above items.

Here is the basic idea of how to use Shaker for VNF performance test case execution.

Basic idea of how to use shaker


In this first blog we walk through the setup and configuration steps to test VNF performance using OpenStack Shaker. Here the VNF is a simple Ubuntu VM with IP forwarding enabled, which lets it act as a router. We use this external VM instead of the built-in OpenStack router to route the Shaker performance traffic between the minion and primary VMs.

The OpenStack setup used for these test cases is listed below.

  • 2 machines — Ubuntu 20.04 OS
  • Devstack stable/victoria
  • One controller+compute node and one compute-only node
  • Controller+Compute — 8 vCPUs, 16GB RAM
  • Compute — 2 vCPUs, 4GB RAM

Please note: for this blog we use only one compute node, so all the VMs are launched on the same node.

Prerequisites

  • Pre-installed OpenStack setup. Here are sample local.conf files that were used in this case.
  • Ubuntu image uploaded to Glance — select an image appropriate to your requirements (https://cloud-images.ubuntu.com/); in this test case bionic-server-cloudimg-amd64.img is used. A sample upload command is shown below.
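
A minimal Glance upload sketch, assuming the image file has been downloaded locally and is registered under the name ubuntu (the name used when launching the VM later):

# openstack image create --disk-format qcow2 --container-format bare --file bionic-server-cloudimg-amd64.img --public ubuntu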

Shaker Setup

Once the OpenStack setup is up and configured with the required Glance image, the next step is to install Shaker and build the Shaker image.

Shaker installation and configuration

Installing shaker using python pip

# pip install --user pyshaker

More info can be found here

Building Shaker base image

The Shaker image can be built using either the shaker-image-builder tool or the diskimage-builder tool. Here is how to use the former:

# shaker-image-builder

More information can be found here.

To use the diskimage-builder tool to build image please refer here.

Shaker flavor creation

The Shaker primary and minion VMs require a shaker flavor defined in the OpenStack setup; here is the command to create one.

# openstack flavor create --ram 512 --disk 3 --vcpus 1 shaker-flavor

Testcase Topology Setup

Now that the prerequisites and Shaker setup are complete, we can proceed with creating the OpenStack resources for our test case.

Please note: all the resources and images are created in the admin tenant, using the admin user in OpenStack.

Network Configuration

First we need to create the networks required for our use case. We require three networks:

  • Management network — used for cloud-init configuration by Shaker and for other configuration purposes
  • East network — connects the test_primary VM to the router (Ubuntu VM)
  • West network — connects the test_minion VM to the router (Ubuntu VM)
Topology of the VNF testcases executed using Shaker

In our scenario, we route the traffic from the east to the west network and vice versa through the router (Ubuntu VM). This use case is similar to the l3_east_west scenario supported by Shaker, the only difference being that instead of the OpenStack built-in router we use an Ubuntu VM acting as a packet forwarder (a simple router).

To create the networks we can use either the dashboard or the openstack CLI.

Management network creation

# openstack network create mgmt

# openstack subnet create mgmt_subnet --network mgmt --subnet-range 20.0.0.0/24

East network creation

# openstack network create east

# openstack subnet create east_subnet --network east --subnet-range 30.0.0.0/24

West network creation

# openstack network create west

# openstack subnet create west_subnet --network west --subnet-range 40.0.0.0/24

Once created, here is how the network details are listed in the dashboard.

Network details in openstack

Router configuration

For Shaker to perform cloud-init configuration on the minion and primary VMs, they need connectivity to the OpenStack metadata (q-meta) service. For that reason we create an OpenStack router and attach the management network as an interface. The public network is set as the router's gateway.

# openstack router create router

# neutron router-gateway-set router public

# openstack router add subnet router mgmt_subnet

Here is the topology view in OpenStack after the network configuration.

Network topology in openstack

Neutron Network Port Configuration

In OpenStack, to let packets with addresses from other subnets pass through Neutron ports, we need to enable allowed_address_pairs on those ports. In our case, since the Ubuntu VM acts as a router forwarding packets from one subnet to the other, the Neutron ports attached to the VM must have the allowed_address_pairs configuration enabled. For this reason we create those Neutron ports beforehand with the allowed address pairs configuration enabled, and use them later when launching the VM.

We now create one port each in the east and west subnets with allowed address pairs enabled. Below are the commands; this can be done using Horizon as well.

# openstack port create east_router --fixed-ip ip-address=30.0.0.10 --network east --allowed-address ip-address=0.0.0.0/0

# openstack port create west_router --fixed-ip ip-address=40.0.0.10 --network west --allowed-address ip-address=0.0.0.0/0
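
To double-check that the configuration is in place, the ports can be inspected (an optional verification step):

# openstack port show east_router -c allowed_address_pairs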

This completes the network and port configurations required for our use case.

Virtual Machine Configuration

The next step is to launch the Ubuntu VM and perform the required configuration.

Before going into the VM configuration details, here is the hypervisor view of the current OpenStack setup. We launch all three VMs (the Ubuntu VM and the Shaker minion and primary VMs) on the controller+compute node, which is the harrypotter hypervisor. We have disabled the other compute node so that all the VMs land on a single node; this is just for experimental purposes, and you can use all the compute nodes if required. A sample command to disable a compute node is shown after the hypervisor view below.

Hypervisor view in the openstack setup
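
To take the spare compute node out of scheduling, the nova-compute service on it can be disabled (a sketch; substitute your compute node's hostname):

# openstack compute service set --disable <compute_hostname> nova-compute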

The Ubuntu VM requires interface configuration and packet forwarding enabled after boot; we use cloud-init for this purpose. Below is the cloud-init script passed as the user-data argument when booting the VM. Alternatively, you can log in to the VM and perform the configuration after boot (provided SSH keys are registered or the username/password is known). Here is a copy of the script.

### test.sh script
#!/bin/bash
# Acquire DHCP leases on the east and west interfaces
sudo dhclient ens4
sudo dhclient ens5
sudo ifconfig ens4 up
sudo ifconfig ens5 up
# Enable IPv4 forwarding so the VM routes packets between the subnets
echo 1 > /proc/sys/net/ipv4/ip_forward

To launch the VM we use the openstack CLI; the same can be replicated using Horizon.

# openstack server create --user-data test.sh --image ubuntu --flavor m1.medium --nic net-id=<mgmt_net_id> --nic port-id=<east_router_port_id> --nic port-id=<west_router_port_id> ubuntu_router

The arguments used here are:

  • test.sh — the user-data script (a copy can be found in the GitHub repo); it brings up the interfaces and enables forwarding on the Ubuntu VM
  • east_router_port_id — port ID of the east_router port created above
  • west_router_port_id — port ID of the west_router port created above
  • image ubuntu — name or ID of the Ubuntu image already uploaded to Glance
Ubuntu VM in openstack setup

Once the VM is up and running, the interfaces are configured by the cloud-init script. We can verify this with a simple ping test from the DHCP namespaces.

# sudo ip netns exec qdhcp-<net_id> ping <interface_ip>

Replace net_id with the mgmt, east and west network IDs, and interface_ip with the corresponding IPs attached to the VM. Here is a sample screenshot. Make sure ICMP security group rules are enabled on the ports before the ping test (a sample rule is shown after the screenshot below).

Ping test to check the VM status
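
If ICMP is not yet allowed, a rule can be added to the security group attached to the VM's ports (a sketch, assuming the admin tenant's default security group is used):

# openstack security group rule create --protocol icmp default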

Shaker Test case execution

Now that we have the Ubuntu VM in place, we proceed with creating the minion and primary VMs through Shaker and connecting them to the Ubuntu VM, so that the traffic test performed by Shaker reflects the network performance of the Ubuntu VM in an OpenStack setup.

Below are the Shaker template files required for the test case execution. The HOT template launches the minion and primary VMs and configures them through cloud-init so that they are connected through the Ubuntu VM. Note that the east_router and west_router port IPs are used in the cloud-init part, which makes the interfaces attached to the Ubuntu VM the gateways for the minion and primary VM interfaces. A copy of the files can be found here.

Please update the mgmt_net_id and the east and west network and subnet IDs in the below HOT template before execution. Save the content to a ubuntu_test.hot file.

heat_template_version: 2013-05-23

description:
  This Heat template creates a pair of networks plugged into the same router.
  Primary instances and minion instances are connected into different networks.

parameters:
  net_id:
    type: string
    description: Network Id of management server
    default: "e12265ed-68a0-45c8-8652-fd41b2594f57"
  image:
    type: string
    description: Name of image to use for servers
  flavor:
    type: string
    description: Flavor to use for servers
  external_net:
    type: string
    description: ID or name of external network for which floating IP addresses will be allocated
  server_endpoint:
    type: string
    description: Server endpoint address
  dns_nameservers:
    type: comma_delimited_list
    description: DNS nameservers for the subnets
  east_private_net:
    type: string
    description: NetId of east network
    default: "4bdc7b4e-e1d9-4a36-9dc6-0dc3260b2313"
  east_private_subnet:
    type: string
    description: SubnetId of east network
    default: "cfa98799-f769-4965-861e-0eec405bc849"
  east_ip_address:
    type: string
    description: Ipaddress for east network
    default: "30.0.0.25"
  west_private_net:
    type: string
    description: NetId of west network
    default: "7f5f0429-7453-49b7-86cf-4f013fddbae4"
  west_private_subnet:
    type: string
    description: SubnetId of west network
    default: "33736de5-ccd9-4903-b6f0-0b272e66eda3"
  west_ip_address:
    type: string
    description: Ipaddress for west network
    default: "40.0.0.25"

resources:
  server_security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules: [
        {remote_ip_prefix: 0.0.0.0/0,
         protocol: tcp,
         port_range_min: 1,
         port_range_max: 65535},
        {remote_ip_prefix: 0.0.0.0/0,
         protocol: udp,
         port_range_min: 1,
         port_range_max: 65535},
        {remote_ip_prefix: 0.0.0.0/0,
         protocol: icmp}]

  east_agent_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: east_private_net }
      fixed_ips:
        - subnet_id: { get_param: east_private_subnet }
          ip_address: { get_param: east_ip_address }
      security_groups: [{ get_resource: server_security_group }]

  west_agent_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: west_private_net }
      fixed_ips:
        - subnet_id: { get_param: west_private_subnet }
          ip_address: { get_param: west_ip_address }
      security_groups: [{ get_resource: server_security_group }]

  test_primary_0:
    type: OS::Nova::Server
    properties:
      name: test_primary_0
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: test
      networks:
        - network: { get_param: net_id }
        - port: { get_resource: east_agent_port }
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/sh
            sudo dhclient ens4
            sudo ip r d default via 30.0.0.1
            sudo ip r a 40.0.0.0/24 via 30.0.0.10 dev ens4
            screen -dmS shaker-agent-screen shaker-agent --server-endpoint=$SERVER_ENDPOINT --agent-id=$AGENT_ID --agent-socket-recv-timeout 10 --agent-socket-send-timeout 10
          params:
            "$SERVER_ENDPOINT": { get_param: server_endpoint }
            "$AGENT_ID": "test_primary_0"

  test_minion_0:
    type: OS::Nova::Server
    properties:
      name: test_minion_0
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: test
      networks:
        - network: { get_param: net_id }
        - port: { get_resource: west_agent_port }
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/sh
            sudo dhclient ens4
            sudo ip r d default via 40.0.0.1
            sudo ip r a 30.0.0.0/24 via 40.0.0.10 dev ens4
            screen -dmS shaker-agent-screen shaker-agent --server-endpoint=$SERVER_ENDPOINT --agent-id=$AGENT_ID --agent-socket-recv-timeout 10 --agent-socket-send-timeout 10
          params:
            "$SERVER_ENDPOINT": { get_param: server_endpoint }
            "$AGENT_ID": "test_minion_0"

outputs:
  test_primary_0:
    value: { get_attr: [ test_primary_0, instance_name ] }
  test_minion_0:
    value: { get_attr: [ test_minion_0, instance_name ] }
  test_primary_0_ip:
    value: { get_attr: [ test_primary_0, networks, east, 0 ] }
  test_minion_0_ip:
    value: { get_attr: [ test_minion_0, networks, west, 0 ] }
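
Optionally, before handing the template to Shaker, its syntax can be sanity-checked against Heat (assuming the orchestration service is enabled in your devstack):

# openstack orchestration template validate -t ubuntu_test.hot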

Now create the ubuntu_test.yaml file, which is given as input to Shaker. It contains the test scenario execution details and points to the above HOT template for the minion and primary VM deployment.

### ubuntu_test.yaml
title: OpenStack VNF test

description:
  In this scenario Shaker launches pairs of instances. Instances are
  connected to one of two tenant networks, which are plugged into the
  router VNF. The traffic goes from one network to the other (L3 east-west).

deployment:
  template: ubuntu_test.hot
  accommodation: [pair, best_effort]

execution:
  progression: quadratic
  tests:
    -
      title: Download
      class: flent
      method: tcp_download
    -
      title: Upload
      class: flent
      method: tcp_upload
    -
      title: Bi-directional
      class: flent
      method: tcp_bidirectional

In the above test scenario we execute the tcp_download, tcp_upload and tcp_bidirectional methods using the flent tool to observe the throughput of the traffic passing through the Ubuntu VM. Now execute the below command to start the test scenario.

# shaker --server-endpoint <host_ip:port> --scenario ubuntu_test.yaml --report ubuntu_test.html --stack-name test

Replace <host_ip:port> with the IP address of the host running Shaker and a free port.

Once the above command is triggered, Shaker launches a Heat stack with the minion and primary VMs, performs the cloud-init configuration, executes the traffic test cases across the VMs, collects the results into ubuntu_test.html, and then cleans up the Heat stack. Execution might take around 10-15 minutes.

Here is the topology view when the stack creation has completed and the test case execution is in progress. Notice that the minion and primary VMs are not connected through the OpenStack router on the east and west networks; they are connected through the Ubuntu VM.

VM details launched on same hypervisor
Graph Topology view
Topology view when shaker is running testcases

Once the execution completes, we can render the ubuntu_test.html page in a browser to observe the results. A sample report is available here.

Shaker report for ubuntu VM
Shaker bidirectional output

Detailed report can be found here.

We observe around 1ms of response time for the ping test and around 4GB/s of throughput. This test case can be extended to use other network measurement tools like iperf or netperf to get a better view of the VNF's performance, as sketched below.
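
For instance, an iperf3-based test could be appended to the tests list in ubuntu_test.yaml. This sketch is modeled on the test classes that ship with Shaker's bundled scenarios; validate the keys against your Shaker version:

    -
      title: TCP (iperf3)
      class: iperf3
    -
      title: UDP (iperf3)
      class: iperf3
      udp: on
      bandwidth: 0
      datagram_size: 32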

Conclusion

We have seen how Shaker can be extended to test the network performance of a simple Ubuntu VM, and the same approach applies to any VNF. Shaker also gives us the flexibility to test the VNF with different network performance measurement tools like iperf, flent, netperf, iperf3, or any custom tool through its shell support.

From the measured performance we get a better view of how the VNF behaves in a VIM environment, enabling us to configure it appropriately and to place it on the right VIM, with an appropriate hypervisor, to get the best results out of the VNF.

Blog — 2 https://sirishagopigiri-692.medium.com/openstack-shaker-for-vyos-vnf-blog-2-8fc8a058de8e

Blog — 3 https://sirishagopigiri-692.medium.com/vnf-performance-measurement-using-shaker-across-openstack-nodes-blog-3-934e18bda220

References

  1. https://docs.openstack.org/victoria/
  2. https://pyshaker.readthedocs.io/en/latest/index.html
  3. https://docs.openstack.org/performance-docs/latest/test_plans/tenant_networking/shaker.html
