Deploy Juniper vMX via Docker Compose
Being able to download and run Juniper vMX on KVM and ESXi has really helped me learn more about networking, telemetry and building automation solutions. But the software dependencies, combined with manually editing and launching shell scripts per vMX instance, felt a bit outdated to me.
Why can't I just get a Docker container with the vMX, deployable using Docker, Docker Compose or Kubernetes? That would allow me to launch a fully virtualized topology with routers and endpoints with a single docker-compose command, try something out, redeploy with different Junos versions and even share the recipe with other users via public Git repos.

Well, this is now possible via the pre-built Docker container juniper/openjnpr-container-vmx, which is capable of launching any vMX qcow2 image from release 17.1 onwards. You do need to "bring your own" licensed Juniper vMX KVM bundle from the official Juniper download site, https://www.juniper.net/support/downloads/?p=vmx.
Readers familiar with the vMX architecture of two virtual machines, one for the control plane (VCP) and another for the forwarding plane (VFP), might have spotted an apparent error in the previous paragraph: how can one deploy a fully functional vMX by providing only the control plane image? This is actually possible because the forwarding engine is downloaded at runtime from the control plane into the VFP, or in the container age, straight into the container!
Use cases
Before I dive into the nitty gritty details on how the container actually works and how to use it, I’d like to point to a few use cases I built and published recently:
- Lab to experiment with DHCP v4 & v6 redundancy over active-active EVPN: https://gitlab.com/mwiget/metro-evpn-bbar-demo
- Junos BNG Pseudowire Headend Termination (PWHT): https://gitlab.com/mwiget/bng-pwht-demo
- Auto discover BGP unnumbered: https://gitlab.com/mwiget/bgp-unnumbered
They all share the use of the pre-built and published Docker container juniper/openjnpr-container-vmx, combined with configuration files and containerized helper applications, orchestrated via a docker-compose.yml file (plus the externally sourced vMX control plane qcow2 images).
The only host software packages you still need to install are git, make, docker.io and docker-compose. Which Linux distribution, you might ask? I've used Ubuntu up to bionic (18.04) and Debian 9, but any other recent distribution should work equally well.
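To get a feel for the workflow (once the requirements below are in place), here is a rough sketch of trying out one of these labs. It assumes the repo keeps its docker-compose.yml at the top level; check its README for the exact steps, and note that you still have to drop your own vMX qcow2 image and license into the clone:
$ git clone https://gitlab.com/mwiget/bgp-unnumbered.git
$ cd bgp-unnumbered
$ cp ~/junos-vmx-x86-64-18.2R1.9.qcow2 .   # your own image; the path here is just an example
$ docker-compose up -d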
Requirements
- Any recent bare-metal Linux server with at least 4GB of RAM and a CPU built in the last 4 years (Ivy Bridge or newer) with KVM acceleration support. Running this within a virtual machine isn't an option, and not just because of the dismal performance of nested VMs.
- Provisioned Memory Hugepages (1GB per vMX instance)
- junos-vmx-x86-64-17.3R1.10.qcow2 image, extracted from the vmx-bundle-*.tgz file available at https://www.juniper.net/support/downloads/?p=vmx or as an eval download from https://www.juniper.net/us/en/dm/free-vmx-trial/ (registration required)
- Docker engine, Docker compose, git and make installed
That's it. No need to install qemu-kvm, virsh or anything else, but don't worry if you happen to have these installed; they won't interfere with the containers. Everything the vMX needs to run is provided by the Docker container. Neat, isn't it?
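Before going further, you can quickly check that the host actually offers KVM acceleration (a minimal sanity check; vmx is the Intel flag, svm the AMD one):
$ egrep -c '(vmx|svm)' /proc/cpuinfo   # must be greater than 0
$ ls -l /dev/kvm                       # device must exist once the kvm module is loaded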
Installation
Install or update your server to a recent Linux distribution of your choice. Personally I use a mix of older desktop and laptop computers locally, but recently I've been using bare-metal servers in the public cloud from https://www.hetzner.de/.
Update and install the required software (example shown for Ubuntu bionic, adjust accordingly):
$ sudo apt-get update
$ sudo apt-get install make git curl docker.io docker-compose
$ sudo usermod -aG docker $USER
The last command adds your user to the docker group, allowing you to run docker commands without sudo (log out and back in for the group change to take effect).
Enable hugepages. Count 1GB per vMX instance, but make sure you leave at least 50% of the memory to all other applications. If your host has 16GB, you can dedicate e.g. 8GB to hugepages. This works with page sizes of 2MB or 1GB and is best done via kernel options in /etc/default/grub (the example shows 8 x 1GB):
$ sudo grep hugepa /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset consoleblank=0 default_hugepagesz=1G hugepagesz=1G hugepages=8"
$ sudo update-grub
$ sudo reboot
Once the system is back, check the allocated hugepages:
$ cat /proc/meminfo |grep Huge
AnonHugePages: 8372224 kB
ShmemHugePages: 0 kB
HugePages_Total: 8
HugePages_Free: 8
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
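If you'd rather not reboot, hugepages can also be allocated at runtime through sysfs. This is just a fallback sketch: allocating 1GB pages on a long-running system can fail once memory is fragmented, so the grub method above is the more reliable option:
$ echo 8 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
$ grep HugePages_Total /proc/meminfo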
Download the latest vmx-bundle-x.y.tgz (18.2R1 at the time I published this post) from https://www.juniper.net/support/downloads/?p=vmx and extract the qcow2 image. You can pick any version from 17.1 onwards, including service releases:
$ tar zxf vmx-bundle-18.2R1.9.tgz
$ mv vmx/images/junos-vmx-x86-64-18.2R1.9.qcow2 .
$ rm -rf vmx
$ ls -l junos-vmx-x86-64-18.2R1.9.qcow2
-rw-r--r-- 1 mwiget mwiget 1334771712 Jul 20 21:51 junos-vmx-x86-64-18.2R1.9.qcow2
Keep the qcow2 image and get rid of the rest of the extracted tar file. The qcow2 file needs to be copied into your project working directory, from where the container instances will be launched.
Build your own lab topology
Ready to get going? Create an empty directory and copy the junos-vmx qcow2 image into it:
$ mkdir my-simple-vmx-lab
$ cd my-simple-vmx-lab
$ cp ../junos-vmx-x86-64-18.2R1.9.qcow2 .
Download the evaluation license key that activates your 60-day, unlimited bandwidth vMX trial. The same key can be used multiple times and the activation period is per running instance:
$ curl -o license-eval.txt https://www.juniper.net/us/en/dm/free-vmx-trial/E421992502.txt
Copy your SSH public key to the current directory:
$ cp ~/.ssh/id_rsa.pub .
If you don't have an SSH public/private keypair, create one:
$ ssh-keygen -t rsa -N ""
Create your docker-compose.yml for your topology, e.g. using the following example:
$ cat docker-compose.yml
version: "3"
services:
  vmx1:
    image: juniper/openjnpr-container-vmx
    privileged: true
    tty: true
    stdin_open: true
    ports:
      - "22"
      - "830"
    environment:
      - ID=vmx1
      - LICENSE=license-eval.txt
      - IMAGE=junos-vmx-x86-64-18.2R1.9.qcow2
      - PUBLICKEY=id_rsa.pub
      - CONFIG=vmx1.conf
    volumes:
      - $PWD:/u:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      mgmt:
      net-a:
      net-b:
      net-c:
  vmx2:
    image: juniper/openjnpr-container-vmx
    privileged: true
    tty: true
    stdin_open: true
    ports:
      - "22"
      - "830"
    environment:
      - ID=vmx2
      - LICENSE=license-eval.txt
      - IMAGE=junos-vmx-x86-64-18.2R1.9.qcow2
      - PUBLICKEY=id_rsa.pub
      - CONFIG=vmx2.conf
    volumes:
      - $PWD:/u:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      mgmt:
      net-a:
      net-b:
      net-c:
networks:
  mgmt:
  net-a:
  net-b:
  net-c:
This defines two vMX instances, each attached to four virtual Docker networks (one for management, three for data), which are also created by the same docker-compose file.
If you are using a Junos release older than 18.2, then add the tag ‘trusty’ to the container image in the docker-compose.yml file, e.g.
image: juniper/openjnpr-container-vmx:trusty
There is one quirk with docker-compose: the order in which the networks are attached to each instance is unpredictable (see https://github.com/docker/compose/issues/4645 for details). The vMX container works around this by sorting the virtual networks alphabetically at runtime. This requires access to docker.sock via the volume mount and also allows the Junos configuration to be augmented with the Docker virtual network names.
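If you're curious, you can check the attached networks and their MAC addresses yourself once the containers are up (next section); this is essentially the information the container sorts through via docker.sock. A minimal sketch using docker inspect:
$ docker inspect --format '{{json .NetworkSettings.Networks}}' mysimplevmxlab_vmx1_1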
The '$PWD:/u:ro' volume attached to each container exposes the current host working directory read-only to the container, giving it access to the vMX qcow2 image, the (optional) Junos configuration file, the license key and your SSH public key.
Adjust the environment variable IMAGE in the docker-compose file to match your qcow2 image. You should now have the following files in your working directory:
$ ls
docker-compose.yml id_rsa.pub junos-vmx-x86-64-18.2R1.9.qcow2 license-eval.txt
Not much, right ;-)? But sufficient to get going. You might wonder when the actual openjnpr-container-vmx container gets downloaded: docker-compose does this automatically at launch. See the next step.
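If you prefer to fetch the container image ahead of time (for example on a slow link), you can pull it explicitly before the first launch:
$ docker-compose pull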
Launch your topology
$ docker-compose up -d
Creating network "mysimplevmxlab_net-c" with the default driver
Creating network "mysimplevmxlab_net-b" with the default driver
Creating network "mysimplevmxlab_net-a" with the default driver
Creating network "mysimplevmxlab_mgmt" with the default driver
Creating mysimplevmxlab_vmx2_1 ... done
Creating mysimplevmxlab_vmx1_1 ... done
The ‘-d’ option launches the instances in the background.
Verify the successful launch of the containers:
$ docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------
mysimplevmxlab_vmx1_1 /launch.sh Up 0.0.0.0:32903->22/tcp, 0.0.0.0:32902->830/tcp
mysimplevmxlab_vmx2_1 /launch.sh Up 0.0.0.0:32901->22/tcp, 0.0.0.0:32900->830/tcp
That doesn't mean they are fully up and running with active forwarding just yet, but they are booting. You can now either wait a few minutes, or watch Junos and the forwarding engine come up via the containers' console logs:
$ docker logs -f mysimplevmxlab_vmx1_1
Juniper Networks vMX Docker Light Container
Linux 8efcff791153 4.15.0-29-generic #31-Ubuntu SMP Tue Jul 17 15:39:52 UTC 2018 x86_64
CPU Model ................................ Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz
CPU affinity of this container ........... 0-7
KVM hardware virtualization extension .... yes
Total System Memory ...................... 62 GB
Free Hugepages ........................... yes (8 x 1024 MB = 8192 MB)
Check for container privileged mode ...... yes
Check for sudo/root privileges ........... yes
Loop mount filesystem capability ......... yes
docker access ............................ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8efcff791153 juniper/openjnpr-container-vmx "/launch.sh" 3 seconds ago Up Less than a second 0.0.0.0:32903->22/tcp, 0.0.0.0:32902->830/tcp mysimplevmxlab_vmx1_1
55f5338c8b71 juniper/openjnpr-container-vmx "/launch.sh" 3 seconds ago Up 1 second 0.0.0.0:32901->22/tcp, 0.0.0.0:32900->830/tcp mysimplevmxlab_vmx2_1
c2ef1bdf83a9 juniper/openjnpr-container-vmx:bionic "/launch.sh" 2 hours ago Up 2 hours 0.0.0.0:32857->22/tcp, 0.0.0.0:32856->830/tcp leaf1
d6b99f4325fa juniper/openjnpr-container-vmx:bionic "/launch.sh" 2 hours ago Up 2 hours 0.0.0.0:32851->22/tcp, 0.0.0.0:32850->830/tcp spine2
722d7e27ae90 juniper/openjnpr-container-vmx:bionic "/launch.sh" 2 hours ago Up 2 hours 0.0.0.0:32865->22/tcp, 0.0.0.0:32864->830/tcp spine1
7fd41ed26279 marcelwiget/vmx-docker-light:latest "/launch.sh" 2 hours ago Up 2 hours 0.0.0.0:32843->22/tcp, 0.0.0.0:32842->830/tcp vmxdockerlight_vmx1_1
304ec56a0400 marcelwiget/vmx-docker-light:latest "/launch.sh" 2 hours ago Up 2 hours 0.0.0.0:32845->22/tcp, 0.0.0.0:32844->830/tcp vmxdockerlight_vmx2_1
cdcaa6014fc2 marcelwiget/dhcptester:latest "/usr/bin/dhcptester…" 34 hours ago Up 34 hours metroevpnbbardemo_dhcptester_1
de3ceea835a0 metroevpnbbardemo_dhcpclient "/launch.sh" 34 hours ago Up 34 hours metroevpnbbardemo_dhcpclient_1
0111d21510ef metroevpnbbardemo_keadhcp6 "/sbin/tini /launch.…" 34 hours ago Up 34 hours metroevpnbbardemo_keadhcp6_1
742118e1b5ca metroevpnbbardemo_dhcp4server "/sbin/tini /launch.…" 34 hours ago Up 34 hours metroevpnbbardemo_dhcp4server_1
b331054554a8 juniper/openjnpr-container-vmx:trusty "/launch.sh" 34 hours ago Up 34 hours 0.0.0.0:32839->22/tcp, 0.0.0.0:32838->830/tcp metroevpnbbardemo_bbar2_1
68154ba01b10 juniper/openjnpr-container-vmx:trusty "/launch.sh" 34 hours ago Up 34 hours 0.0.0.0:32837->22/tcp, 0.0.0.0:32836->830/tcp metroevpnbbardemo_bbar1_1
24c4846bd4d5 juniper/openjnpr-container-vmx:trusty "/launch.sh" 34 hours ago Up 34 hours 0.0.0.0:32835->22/tcp, 0.0.0.0:32834->830/tcp metroevpnbbardemo_core1_1
yes
lcpu affinity ............................ 0-7
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
system dependencies ok
/u contains the following files:
docker-compose.yml junos-vmx-x86-64-18.2R1.9.qcow2
id_rsa.pub license-eval.txt
/fix_network_order.sh: trying to fix network interface order via docker inspect myself ...
MACS=02:42:c0:a8:50:03 02:42:c0:a8:40:03 02:42:c0:a8:30:03 02:42:c0:a8:20:03
02:42:c0:a8:50:03 eth0 == eth0
02:42:c0:a8:40:03 eth1 == eth1
02:42:c0:a8:30:03 eth3 -> eth2
FROM eth3 () TO eth2 ()
Actual changes:
tx-checksumming: off
tx-checksum-ip-generic: off
tx-checksum-sctp: off
tcp-segmentation-offload: off
tx-tcp-segmentation: off [requested on]
tx-tcp-ecn-segmentation: off [requested on]
tx-tcp-mangleid-segmentation: off [requested on]
tx-tcp6-segmentation: off [requested on]
Actual changes:
tx-checksumming: off
tx-checksum-ip-generic: off
tx-checksum-sctp: off
tcp-segmentation-offload: off
tx-tcp-segmentation: off [requested on]
tx-tcp-ecn-segmentation: off [requested on]
tx-tcp-mangleid-segmentation: off [requested on]
tx-tcp6-segmentation: off [requested on]
02:42:c0:a8:20:03 eth3 == eth3
using qcow2 image junos-vmx-x86-64-18.2R1.9.qcow2
LICENSE=license-eval.txt
269: eth0@if270: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:c0:a8:50:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Interface IPv6 address
Bridging (/02:42:c0:a8:50:03) with fxp0
Current MAC: 02:42:c0:a8:50:03 (unknown)
Permanent MAC: 00:00:00:00:00:00 (XEROX CORPORATION)
New MAC: 00:1d:20:7d:12:45 (COMTREND CO.)
-----------------------------------------------------------------------
vMX mysimplevmxlab_vmx1_1 (192.168.80.3) 18.2R1.9 root password vohdaiph4veekah1as5raeSi
-----------------------------------------------------------------------
bridge name bridge id STP enabled interfaces
br-ext 8000.001d207d1245 no eth0
fxp0
br-int 8000.16d69246dcb7 no em1
Creating config drive /tmp/configdrive.qcow2
METADISK=/tmp/configdrive.qcow2 CONFIG=/tmp/vmx1.conf LICENSE=/u/license-eval.txt
Creating config drive (configdrive.img) ...
extracting licenses from /u/license-eval.txt
writing license file config_drive/config/license/E435890758.lic ...
adding config file /tmp/vmx1.conf
-rw-r--r-- 1 root root 458752 Jul 26 13:45 /tmp/configdrive.qcow2
Creating empty /tmp/vmxhdd.img for VCP ...
Starting PFE ...
Booting VCP ...
Waiting for VCP to boot... Consoles: serial port
BIOS drive A: is disk0
BIOS drive C: is disk1
BIOS drive D: is disk2
BIOS drive E: is disk3
BIOS 639kB/1047424kB available memory
FreeBSD/x86 bootstrap loader, Revision 1.1
(builder@feyrith.juniper.net, Thu Jun 14 14:21:45 PDT 2018)
-
Booting from Junos volume ...
|
/packages/sets/pending/boot/os-kernel/kernel text=0x443df8 data=0x82258+0x290990 syms=[0x8+0x94aa0+0x8+0x814cd]
/packages/sets/pending/boot/junos-net-platform/mtx_re.ko size 0x2239a0
. . .
You can stop following the logs with Ctrl-C at any time; this won't interrupt the instance.
Take note of the password line in the log file. It not only contains the auto-generated plain-text root password, but also the vMX version and its management IP address:
$ docker-compose logs|grep password
vmx2_1 | vMX mysimplevmxlab_vmx2_1 (192.168.80.2) 18.2R1.9 root password otheem4ocahTh3aej6ah2oos
vmx1_1 | vMX mysimplevmxlab_vmx1_1 (192.168.80.3) 18.2R1.9 root password vohdaiph4veekah1as5raeSi
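Based on the log format above, here is a small sketch to extract the management IP of an instance and SSH straight into it (the sed expression simply grabs the address in parentheses):
$ ip=$(docker-compose logs vmx1 | grep 'root password' | tail -1 | sed -E 's/.*\(([0-9.]+)\).*/\1/')
$ ssh $ip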
Log into your vMXs using SSH to the IP addresses shown. You might rightfully ask yourself: who assigned this IP address, and how did the vMX learn about it? And why am I not asked for a password? Magic? Well no ;-), just good automation done by the launch container: it copies the user's SSH public key (id_rsa.pub) and the assigned IP of the container's eth0 interface into the Junos configuration, for your user account and fxp0 respectively, within the apply group openjnpr-container-vmx. You can see this in the output below:
$ ssh 192.168.80.3
The authenticity of host '192.168.80.3 (192.168.80.3)' can't be established.
ECDSA key fingerprint is SHA256:nZn+TFQgh5xQshQIeoiCb79kCWBgYPVt2VNgXfsw6Zc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.80.3' (ECDSA) to the list of known hosts.
--- JUNOS 18.2R1.9 Kernel 64-bit JNPR-11.0-20180614.6c3f819_buil
mwiget@mysimplevmxlab_vmx1_1> show configuration groups openjnpr-container-vmx
system {
configuration-database {
ephemeral {
instance openjnpr-container-vmx-vfp0;
}
}
login {
user mwiget {
uid 2000;
class super-user;
authentication {
encrypted-password "$1$Quohz5fu$XAlF3qxxESywZDUY52PuI/"; ## SECRET-DATA
ssh-rsa "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLxUbwJ8sJD1euXqRvnU8tblaNGYWVdcGVksYu2GKwmfGtadEhtN5nG4trGBR3wMBse2HEe/Fhg4IVIFqAmvxQ0hj5KvZRnYg3eQYouLF8UprRM5a9IzYIlBjdwYMQaNIwDOh/TfV+W1famLSkPdXAiX/1Tq9YXzsBtSkfLWlKanx/np6ZhamC+Wfsh7jAIJsqB0gLWId2yl/hVV8lDCnL7WvuPby8IMKI1oWNdQkl87lb34ot8WsnYxtgPwNNTwhNLjc7byTuj+B7olZczWSWexDscd+xmXA7F6OR8riIZvY/z/OaLn2r+pUNSHwXXAqoNM5KDbIpXKP8fagbSS5B mwiget@sb"; ## SECRET-DATA
}
}
}
root-authentication {
encrypted-password "$1$Quohz5fu$XAlF3qxxESywZDUY52PuI/"; ## SECRET-DATA
ssh-rsa "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLxUbwJ8sJD1euXqRvnU8tblaNGYWVdcGVksYu2GKwmfGtadEhtN5nG4trGBR3wMBse2HEe/Fhg4IVIFqAmvxQ0hj5KvZRnYg3eQYouLF8UprRM5a9IzYIlBjdwYMQaNIwDOh/TfV+W1famLSkPdXAiX/1Tq9YXzsBtSkfLWlKanx/np6ZhamC+Wfsh7jAIJsqB0gLWId2yl/hVV8lDCnL7WvuPby8IMKI1oWNdQkl87lb34ot8WsnYxtgPwNNTwhNLjc7byTuj+B7olZczWSWexDscd+xmXA7F6OR8riIZvY/z/OaLn2r+pUNSHwXXAqoNM5KDbIpXKP8fagbSS5B mwiget@sb"; ## SECRET-DATA
}
host-name mysimplevmxlab_vmx1_1;
services {
ssh {
client-alive-interval 30;
}
netconf {
ssh;
}
}
syslog {
file messages {
any notice;
}
}
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 192.168.80.3/20;
}
}
}
}
mwiget@mysimplevmxlab_vmx1_1> show configuration apply-groups
## Last commit: 2018-07-26 13:48:25 UTC by root
apply-groups openjnpr-container-vmx;
The same SSH public key is also given to the root account, and ssh and netconf are activated. Remember, we haven't even created the Junos configuration files referenced by the docker-compose.yml file, vmx1.conf and vmx2.conf. You can provide them, e.g. by saving the configurations from your running instances. Make sure they contain the 'apply-groups' statement, otherwise a relaunched instance won't learn its new management IP address:
$ ssh 192.168.80.3 show conf > vmx1.conf.new
$ ls -l vmx1.conf*
-rw-rw-r-- 1 mwiget mwiget 2212 Jul 26 15:59 vmx1.conf.new
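Once you're happy with the configurations, a sketch to save them under the exact file names referenced in docker-compose.yml (using the management IPs from this example run; keep the apply-groups statement in them, as noted above):
$ ssh 192.168.80.3 show configuration > vmx1.conf
$ ssh 192.168.80.2 show configuration > vmx2.conf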
Let's check the forwarding engine and its interfaces:
$ ssh 192.168.80.3
Last login: Thu Jul 26 13:55:09 2018 from 192.168.80.1
--- JUNOS 18.2R1.9 Kernel 64-bit JNPR-11.0-20180614.6c3f819_buil
mwiget@mysimplevmxlab_vmx1_1> show interfaces descriptions
Interface Admin Link Description
ge-0/0/0 up up mysimplevmxlab_net-a
ge-0/0/1 up up mysimplevmxlab_net-b
ge-0/0/2 up up mysimplevmxlab_net-c
fxp0 up up mysimplevmxlab_mgmt
mwiget@mysimplevmxlab_vmx1_1>
Cool. Everything looks good. But where do the interface descriptions come from? They haven't been added to the apply group; they were added at runtime to an ephemeral configuration database instance called openjnpr-container-vmx-vfp0:
mwiget@mysimplevmxlab_vmx1_1> show ephemeral-configuration instance openjnpr-container-vmx-vfp0
## Last changed: 2018-07-26 13:48:36 UTC
interfaces {
ge-0/0/0 {
description mysimplevmxlab_net-a;
}
ge-0/0/1 {
description mysimplevmxlab_net-b;
}
ge-0/0/2 {
description mysimplevmxlab_net-c;
}
fxp0 {
description mysimplevmxlab_mgmt;
}
}
and this ephemeral db instance is activated automatically via the following configuration statement as part of the apply group:
mwiget@mysimplevmxlab_vmx1_1> show configuration groups |display set |match ephemeral
set groups openjnpr-container-vmx system configuration-database ephemeral instance openjnpr-container-vmx-vfp0
Not sure about you, but IMHO Junos really rocks, offering such comprehensive automation techniques.
Next steps: configure your vMX instances to your needs and test them. The virtual network interfaces are built with Linux bridges, so don't expect much performance. Bridges also block L2 protocols such as LLDP and LACP; if you need those, switch to other Docker network drivers like macvlan. VLANs, however, are supported over the default Docker networks. Check out the example repos shown at the beginning of this blog post.
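As a sketch of such an alternative (the parent interface eth1 and the subnet are placeholders for your environment), a macvlan network can be created once on the host and then referenced from docker-compose.yml as an external network:
$ docker network create -d macvlan -o parent=eth1 --subnet 192.168.100.0/24 net-phys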
While building this example, I had a few other vMXs running on the same Linux server, 10 in total. An interesting way to see all the vMX forwarding daemons (riot) running natively on the Linux host (though isolated within their containers' namespaces) is this:
$ ps ax|grep riot|grep lcores| wc -l
10
$ ps ax|grep riot|grep lcores
562 pts/0 Sl 19:09 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:c0:a8:10:02 --vdev net_pcap1,iface=eth2,mac=02:42:c0:a8:00:02 --vdev net_pcap2,iface=eth3,mac=02:42:ac:1f:00:02 --vdev net_pcap3,iface=eth4,mac=02:42:ac:1e:00:02 --vdev net_pcap4,iface=eth5,mac=02:42:ac:1d:00:02 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2),(3,0,3,2),(4,0,4,2), --tx (0,2),(1,2),(2,2),(3,2),(4,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
781 pts/0 Sl 19:07 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:ac:18:00:03 --vdev net_pcap1,iface=eth2,mac=02:42:ac:19:00:02 --vdev net_pcap2,iface=eth3,mac=02:42:ac:1a:00:02 --vdev net_pcap3,iface=eth4,mac=02:42:ac:1b:00:02 --vdev net_pcap4,iface=eth5,mac=02:42:ac:1c:00:02 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2),(3,0,3,2),(4,0,4,2), --tx (0,2),(1,2),(2,2),(3,2),(4,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
897 pts/0 Sl 20:12 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:ac:18:00:02 --vdev net_pcap1,iface=eth2,mac=02:42:ac:19:00:03 --vdev net_pcap2,iface=eth3,mac=02:42:ac:1a:00:03 --vdev net_pcap3,iface=eth4,mac=02:42:ac:1b:00:03 --vdev net_pcap4,iface=eth5,mac=02:42:ac:1c:00:03 --vdev net_pcap5,iface=eth6,mac=02:42:c0:a8:10:03 --vdev net_pcap6,iface=eth7,mac=02:42:c0:a8:00:03 --vdev net_pcap7,iface=eth8,mac=02:42:ac:1f:00:03 --vdev net_pcap8,iface=eth9,mac=02:42:ac:1e:00:03 --vdev net_pcap9,iface=eth10,mac=02:42:ac:1d:00:03 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2),(3,0,3,2),(4,0,4,2),(5,0,5,2),(6,0,6,2),(7,0,7,2),(8,0,8,2),(9,0,9,2), --tx (0,2),(1,2),(2,2),(3,2),(4,2),(5,2),(6,2),(7,2),(8,2),(9,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
2247 pts/0 Sl 212:32 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev eth_pcap0,iface=eth1,mac=02:42:0a:0a:00:02 --vdev eth_pcap1,iface=eth2,mac=02:42:0a:63:00:02 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2), --tx (0,2),(1,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
2550 pts/0 Sl 231:18 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev eth_pcap0,iface=eth1,mac=02:42:ac:12:00:03 --vdev eth_pcap1,iface=eth2,mac=02:42:0a:02:00:03 --vdev eth_pcap2,iface=eth3,mac=02:42:0a:0a:00:04 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2), --tx (0,2),(1,2),(2,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
2712 pts/0 Sl 236:51 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev eth_pcap0,iface=eth1,mac=02:42:ac:12:00:02 --vdev eth_pcap1,iface=eth2,mac=02:42:0a:02:00:02 --vdev eth_pcap2,iface=eth3,mac=02:42:0a:0a:00:03 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2), --tx (0,2),(1,2),(2,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
21007 pts/0 Sl 2:42 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:c0:a8:40:02 --vdev net_pcap1,iface=eth2,mac=02:42:c0:a8:30:02 --vdev net_pcap2,iface=eth3,mac=02:42:c0:a8:20:02 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2), --tx (0,2),(1,2),(2,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
21379 pts/0 Sl 2:41 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:c0:a8:40:03 --vdev net_pcap1,iface=eth2,mac=02:42:c0:a8:30:03 --vdev net_pcap2,iface=eth3,mac=02:42:c0:a8:20:03 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2), --tx (0,2),(1,2),(2,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
31702 pts/0 Sl 24:01 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:ac:15:00:03 --vdev net_pcap1,iface=eth2,mac=02:42:ac:14:00:02 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2), --tx (0,2),(1,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
31887 pts/0 Sl 24:02 /home/pfe/riot/build/app/riot -c 0xff -n 2 --lcores=0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:ac:15:00:02 --vdev net_pcap1,iface=eth2,mac=02:42:ac:14:00:03 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2), --tx (0,2),(1,2), --w 3,4,5,6,7 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
Terminate the instances
$ docker-compose down
Final words
That's it. I hope you enjoyed reading this blog post. Let me know in the comment section. If you are interested in how the container is built, you can check its source here:
https://github.com/Juniper/OpenJNPR-Container-vMX
Please also check out https://www.tesuto.com/, which offers cloud-based network emulation at scale. They use some of the techniques of this vMX container to bring up Juniper vMX.
Why is running nested in a VM not possible?
Docker would just be running inside a VM.
The KVM acceleration kernel module must be loaded (which is doable with VMware), but I experienced riot crashes. Overall it is just too slow; not worth the effort.
Hi,
I see this error
ERROR: for jlab_vmx1_1 Cannot create container for service vmx1: create .: volume name is too short
Any info on this one?
Remove line 23 from docker-compose.yml. https://github.com/Juniper/OpenJNPR-Container-vMX/blob/master/docker-compose.yml#L23
I've seen this kind of error when that folder doesn't exist. HDDIMAGE is only needed when one wants to have persistent storage across reboots of the host.
Thanks for the quick reply. You have a nice blog. The error was actually a known Docker issue of having an incorrect PWD.
-Rakesh
https://r2079.wordpress.com
Thank you. Didn’t know about the PWD issue.
Hi Marcel,
My VCP boots fine, but I don't see any Gigabit interfaces, and I think that's because the VFP has a problem or does not boot up properly despite several hacks and reboots:
Should I be worried that it says No such file or directory?
root@myopenjnprlabs_vmx2_1>
patching start_vmxt.sh …
use cpu 11 for Junos
patching riot.tgz …
patching file riot/nested_env.sh
Hunk #1 succeeded at 86 (offset 16 lines).
Hunk #2 FAILED at 83.
Hunk #3 succeeded at 108 with fuzz 2 (offset 9 lines).
1 out of 3 hunks FAILED -- saving rejects to file riot/nested_env.sh.rej
patching file riot/nested_env.sh
Hunk #1 succeeded at 91 with fuzz 2.
patching file riot/start_riot.sh
patching file riot/device_list.sh
Hunk #1 succeeded at 90 (offset 1 line).
Hunk #2 succeeded at 135 with fuzz 1 (offset 1 line).
patching done. Uploading riot_lnx.tgz to VCP …
starting mpcsd
sh: /usr/share/pfe/set_fips_optest.sh: No such file or directory
fpc.core.push.sh: no process found
mpc :
tnp_hello_tx: no process found
cat: /var/jnx/card/local/type: No such file or directory
tx_hello_tx: Failed to get card type defaulting to 0
cat: /var/jnx/card/local/slot: No such file or directory
tx_hello_tx: Failed to get card slot defaulting to 0
tnp_hello_tx: Board type 0
tnp_hello_tx: Board slot 0
Setting Up DPDK for Docker env
0x0BAA
cat: /var/jnx/card/local/type: No such file or directory
grep: /etc/riot/init.conf: No such file or directory
cat: /usr/share/pfe/pcie_add.cfg: No such file or directory
cat: /usr/share/pfe/pcie_add.cfg: No such file or directory
start_riot.sh: line 192: python: command not found
logger: socket /dev/log: No such file or directory
————————–
user1@myopenjnprlabs_vmx2_1> show chassis fpc
Temp CPU Utilization (%) CPU Utilization (%) Memory Utilization (%)
Slot State (C) Total Interrupt 1min 5min 15min DRAM (MB) Heap Buffer
0 Present Absent
….
user1@myopenjnprlabs_vmx2_1> show chassis hardware
Hardware inventory:
Item Version Part number Serial number Description
Chassis VMxxxxxxxxxxx VMX
Midplane
Routing Engine 0 RE-VMX
CB 0 VMX SCB
FPC 0 Virtual FPC
CPU
————————–
user1@docker1-server1:~/MY-LABS/MY-OpenJNPR-LABS$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
137fbc31a4e7 juniper/openjnpr-container-vmx "/launch.sh" 33 minutes ago Up 33 minutes 0.0.0.0:32770->22/tcp, 0.0.0.0:32769->830/tcp myopenjnprlabs_vmx1_1
69a02c6f85c4 juniper/openjnpr-container-vmx "/launch.sh" 33 minutes ago Up 33 minutes 0.0.0.0:32775->22/tcp, 0.0.0.0:32774->830/tcp myopenjnprlabs_vmx2_1
————————–
user1@docker1-server1$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 4
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 44
Model name: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Stepping: 2
CPU MHz: 2660.000
BogoMIPS: 5320.00
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-11
NUMA node1 CPU(s): 12-23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 cx16 sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes hypervisor lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid tsc_adjust arat flush_l1d arch_capabilities
You are running nested (using VMware). This is not supported and is the reason why the VFP fails to launch.
Thanks for the response Marcel,
but as you can see from my lscpu output (flags section),
it seems to me that all the hardware CPU flags are exposed to the VM, so I don't think KVM/QEMU/riot would know that they are running nested?
Am I wrong in this assumption?
Do you know which feature exactly (or the lack of it) makes the VFP not run in a VM?
There are at least two issues currently with VMware: the first one is start_riot.sh, which has some VMware-related hooks built in that don't work in the container version. You see this failing in your log at line 192. One could patch the script to bypass this issue (the container uses a slightly modified method to build the interface list for riot).
The second issue I've seen happening is when launching riot in the container. It often crashes right away when running in a nested environment (which also affects the VCP). I never managed to get to the root cause, mainly because I don't have source code access to riot, and it normally uses a Wind River-based Linux libc, whereas the container version is Ubuntu-based.
We have proof of the container working nested in Google Cloud, where KVM acceleration is available.
Hope this helps. For me, the use case was never compelling enough to invest my own time into getting VMware support.
thanks Marcel!
I guess i have to resort to bare metal as well…
Hey Marcel,
Do you think this OpenJNPR docker container would run fine (including the VCP/VFP) on Ubuntu on KVM?
I tend to think that Docker uses the same hardware resources as the base OS (in this case Ubuntu).
Would KVM get in the way of the VFP working fine?
This is the problem I am facing, and maybe you can help:
I have a bunch of 10 bare-metal servers and I need to set up an OpenJNPR dockerized vMX lab environment at work, which will be shared by several people.
My dilemma is how to cluster these 10 bare-metal servers without using a hypervisor (since VMware at least seems to prevent the VFP from working in OpenJNPR) ... wondering if KVM will work.
The thing is, I need all the docker instances to run on the same 'logical server' and utilise all the available hardware resources in the pool of servers, without running each physical server separately.
Any tips?
Well, why not use an external switch and multiple NICs? You can use the macvlan driver in Docker to share a physical NIC with multiple vMX interfaces, with and without VLAN tags.
If you need large-scale vMX deployments for testing, have a serious look at https://www.tesuto.com, where you can build topologies of any size using vMX and others.
Thanks, but what I'm looking for is how to cluster many physical servers into one huge logical server with plenty of RAM and CPU, allowing me to run many OpenJNPR docker instances on top of that logical server, shared by many people.
This seems to be a challenge unless I run OpenJNPR directly on bare metal.
Well, it probably can be made to work (vMX nested on top of Ubuntu KVM) and hopefully someone will try it out and contribute to the project where needed. It might just work, but I don't have the cycles for it, as I don't have a use case myself.
Hi Marcel,
What am I doing wrong here?
(Note I have already deleted line 23 in docker-compose.yml as you indicated in comment 711)
————–
labadmin@node3:~/OpenJNPR-Container-vMX$ pwd
/home/labadmin/OpenJNPR-Container-vMX
————–
labadmin@node3:~/OpenJNPR-Container-vMX$ sudo docker-compose up -d
WARNING: The PWD variable is not set. Defaulting to a blank string.
Creating openjnprcontainervmx_vmx3_1 …
Creating openjnprcontainervmx_vmx2_1 …
Creating openjnprcontainervmx_vmx1_1 …
Creating openjnprcontainervmx_vmx4_1 …
Creating openjnprcontainervmx_vmx2_1
Creating openjnprcontainervmx_vmx3_1
Creating openjnprcontainervmx_vmx1_1
Creating openjnprcontainervmx_vmx2_1 … error
Creating openjnprcontainervmx_vmx4_1 … error
Creating openjnprcontainervmx_vmx1_1 … error
Creating openjnprcontainervmx_vmx3_1 … error
ERROR: for openjnprcontainervmx_vmx3_1 Cannot create container for service vmx3: create .: volume name is too short, names should be at least two alphanumeric characters
ERROR: for vmx3 Cannot create container for service vmx3: create .: volume name is too short, names should be at least two alphanumeric characters
ERROR: for vmx2 Cannot create container for service vmx2: create .: volume name is too short, names should be at least two alphanumeric characters
ERROR: for vmx1 Cannot create container for service vmx1: create .: volume name is too short, names should be at least two alphanumeric characters
ERROR: for vmx4 Cannot create container for service vmx4: create .: volume name is too short, names should be at least two alphanumeric characters
ERROR: Encountered errors while bringing up the project.
————–
labadmin@node3:~/OpenJNPR-Container-vMX$ ls -ltrh
total 1.4G
-rw-r--r-- 1 labadmin labadmin 1.4G Dec 17 2018 junos-vmx-x86-64-18.4R1.8.qcow2
-rw-rw-r-- 1 labadmin labadmin 915 Nov 13 15:50 Makefile
-rw-rw-r-- 1 labadmin labadmin 12K Nov 13 15:50 LICENSE
-rw-rw-r-- 1 labadmin labadmin 37 Nov 13 15:50 vmx1.conf
drwxrwxr-x 2 labadmin labadmin 4.0K Nov 13 15:50 src
drwxrwxr-x 2 labadmin labadmin 4.0K Nov 13 15:50 regression
-rw-rw-r-- 1 labadmin labadmin 21K Nov 13 15:50 README.md
-rwxrwxr-x 1 labadmin labadmin 793 Nov 13 15:50 getpass.sh
-rw-rw-r-- 1 labadmin labadmin 1.3K Nov 13 15:50 docker-compose.yml-old
-rw-rw-r-- 1 labadmin labadmin 2.7K Nov 13 16:04 docker-compose.yml
-rw-rw-r-- 1 labadmin labadmin 204 Nov 13 16:08 license-eval.txt
-rw-r--r-- 1 labadmin labadmin 408 Nov 13 16:13 id_rsa.pub
drwxrwxr-x 2 labadmin labadmin 4.0K Nov 13 16:26 images
———–
labadmin@node3:~/OpenJNPR-Container-vMX$ cat docker-compose.yml
# Copyright (c) 2017, Juniper Networks, Inc.
# # All rights reserved.
#
version: "3"
services:
  vmx1:
    image: juniper/openjnpr-container-vmx:bionic
    privileged: true
    tty: true
    stdin_open: true
    ports:
      - "22"
      - "830"
    environment:
      - ID=vmx1
      - LICENSE=license-eval.txt
      # - IMAGE=junos-vmx-x86-64-18.4R1.8.qcow2
      - IMAGE=junos-vmx-x86-64-18.4R1.8.qcow2
      - PUBLICKEY=id_rsa.pub
      - CONFIG=vmx1.conf
      - NUMCPUS=4
      # - HDDIMAGE=/images/p1.qcow2 # if we want it to be persistent
    volumes:
      - $PWD/images:/images
      - $PWD:/u:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      mgmt:
      net-a:
      net-b:
      net-c:
      net-d:
  vmx2:
    image: juniper/openjnpr-container-vmx:bionic
    privileged: true
    tty: true
    stdin_open: true
    ports:
      - "22"
      - "830"
    environment:
      - ID=vmx2
      - LICENSE=license-eval.txt
      - IMAGE=junos-vmx-x86-64-18.4R1.8.qcow2
      - PUBLICKEY=id_rsa.pub
      - CONFIG=vmx2.conf
      - NUMCPUS=4
    volumes:
      - $PWD/images:/images
      - $PWD:/u:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      mgmt:
      net-a:
      net-b:
      net-c:
      net-d:
  vmx3:
    image: juniper/openjnpr-container-vmx:bionic
    privileged: true
    tty: true
    stdin_open: true
    ports:
      - "22"
      - "830"
    environment:
      - ID=vmx3
      - LICENSE=license-eval.txt
      # - IMAGE=junos-vmx-x86-64-18.4R1.8.qcow2
      - IMAGE=junos-vmx-x86-64-18.4R1.8.qcow2
      - PUBLICKEY=id_rsa.pub
      - CONFIG=vmx1.conf
      - NUMCPUS=4
      # - HDDIMAGE=/images/p1.qcow2 # if we want it to be persistent
    volumes:
      - $PWD/images:/images
      - $PWD:/u:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      mgmt:
      net-a:
      net-b:
      net-c:
      net-d:
  vmx4:
    image: juniper/openjnpr-container-vmx:bionic
    privileged: true
    tty: true
    stdin_open: true
    ports:
      - "22"
      - "830"
    environment:
      - ID=vmx4
      - LICENSE=license-eval.txt
      # - IMAGE=junos-vmx-x86-64-18.4R1.8.qcow2
      - IMAGE=junos-vmx-x86-64-18.4R1.8.qcow2
      - PUBLICKEY=id_rsa.pub
      - CONFIG=vmx1.conf
      - NUMCPUS=4
      # - HDDIMAGE=/images/p1.qcow2 # if we want it to be persistent
    volumes:
      - $PWD/images:/images
      - $PWD:/u:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      mgmt:
      net-a:
      net-b:
      net-c:
      net-d:
networks:
  mgmt:
  net-a:
  net-b:
  net-c:
  net-d:
Where did you store your junos images relative to the docker-compose file? If they are in the same folder, you don’t need the /images volume line for each vmx and you can remove it. The images will be picked up via the volume line containing /u (which is the mount folder within the container).
It could also be you don’t have a folder images relative to the docker-compose file, which would trigger the error message seen.
I have the Junos image both in the images folder and in the root of the OpenJNPR-Container-vMX folder (see below).
I also commented out line 27 (- $PWD/images:/images) in docker-compose.yml,
but I still experience the "volume name is too short" error when I run docker-compose up -d.
Any ideas?
labadmin@node3:~/OpenJNPR-Container-vMX$ ls -l images/
total 1386120
-rw-r--r-- 1 labadmin labadmin 1419378688 Nov 13 16:26 junos-vmx-x86-64-18.4R1.8.qcow2
-rw-rw-r-- 1 labadmin labadmin 85 Nov 13 15:50 README.md
labadmin@node3:~/OpenJNPR-Container-vMX$ ls -l junos-vmx-x86-64-18.4R1.8.qcow2
-rw-r--r-- 1 labadmin labadmin 1419378688 Dec 17 2018 junos-vmx-x86-64-18.4R1.8.qcow2
labadmin@node3:~/OpenJNPR-Container-vMX$ grep images docker-compose.yml
# - HDDIMAGE=/images/p1.qcow2 # if we want it to be persistent
# - $PWD/images:/images
# - $PWD/images:/images
# - HDDIMAGE=/images/p1.qcow2 # if we want it to be persistent
# - $PWD/images:/images
# - HDDIMAGE=/images/p1.qcow2 # if we want it to be persistent
# - $PWD/images:/images
Hmm, do you have a .env file in the folder by chance? Check that content. Also check the docker version (19.03.4 here) and docker-compose version (using 1.23.2), and of course the latest version of the repo https://github.com/juniper/OpenJNPR-Container-vMX. I just re-ran those versions successfully.
Forgot to mention, I ran the actual version of docker-compose.yml from the repo unmodified.
I finally managed to bring up the docker images after running this:
labadmin@node3:~/OpenJNPR-Container-vMX$ sudo PWD="$(pwd)" docker-compose up -d
All my images are up, but riot restarts in a loop; see the logs below.
————————————————————————————————
————————————————————————————————
labadmin@node3:~/OpenJNPR-Container-vMX$ sudo docker-compose ps
WARNING: The PWD variable is not set. Defaulting to a blank string.
Name Command State Ports
————————————————————————————————
openjnprcontainervmx_vmx1_1 /launch.sh Up 0.0.0.0:32783->22/tcp, 0.0.0.0:32782->830/tcp
openjnprcontainervmx_vmx2_1 /launch.sh Up 0.0.0.0:32775->22/tcp, 0.0.0.0:32774->830/tcp
openjnprcontainervmx_vmx3_1 /launch.sh Up 0.0.0.0:32771->22/tcp, 0.0.0.0:32770->830/tcp
openjnprcontainervmx_vmx4_1 /launch.sh Up 0.0.0.0:32781->22/tcp, 0.0.0.0:32780->830/tcp
riot restarts in a loop despite the fact that I am on bare metal:
————————————————————————————————
————————————————————————————————
labadmin@node3:~/OpenJNPR-Container-vMX$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 44
Model name: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
Stepping: 2
CPU MHz: 2399.973
CPU max MHz: 2395.0000
CPU min MHz: 1596.0000
BogoMIPS: 4799.91
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-3,8-11
NUMA node1 CPU(s): 4-7,12-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d
————————————————————————————————
————————————————————————————————
root@openjnprcontainervmx_vmx1_1>
patching start_vmxt.sh …
use cpu 5 for Junos
patching riot.tgz …
patching file riot/nested_env.sh
Hunk #1 succeeded at 86 (offset 16 lines).
Hunk #2 FAILED at 83. Hunk #3 succeeded at 108 with fuzz 2 (offset 9 lines). 1 out of 3 hunks FAILED -- saving rejects to file riot/nested_env.sh.rej
patching file riot/nested_env.sh
Hunk #1 succeeded at 91 with fuzz 2.
patching file riot/nested_env.sh
Hunk #1 FAILED at 118.
1 out of 1 hunk FAILED -- saving rejects to file riot/nested_env.sh.rej
patching file riot/start_riot.sh
patching file riot/device_list.sh
Hunk #1 succeeded at 90 (offset 1 line).
Hunk #2 succeeded at 135 with fuzz 1 (offset 1 line).
patching done. Uploading riot_lnx.tgz to VCP ...
starting mpcsd
sh: /usr/share/pfe/set_fips_optest.sh: No such file or directory
fpc.core.push.sh: no process found
mpc :
tnp_hello_tx: no process found
cat: /var/jnx/card/local/type: No such file or directory
tx_hello_tx: Failed to get card type defaulting to 0
cat: /var/jnx/card/local/slot: No such file or directory
tx_hello_tx: Failed to get card slot defaulting to 0
tnp_hello_tx: Board type 0
tnp_hello_tx: Board slot 0
Setting Up DPDK for Docker env
0x0BAA
cat: /var/jnx/card/local/type: No such file or directory
grep: /etc/riot/init.conf: No such file or directory
Cannot get driver information: Operation not supported
/home/pfe/riot/build/app/riot -c 0xffff -n 2 --lcores='0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7,8@8,9@9,10@10,11@11,12@12,13@13,14@14,15@15' --log-level=5 --vdev 'net_pcap0,iface=tunl0,mac=' --vdev 'net_pcap1,iface=eth1,mac=02:42:ac:15:00:03' --vdev 'net_pcap2,iface=eth2,mac=02:42:ac:14:00:02' --vdev 'net_pcap3,iface=eth3,mac=02:42:ac:13:00:05' --vdev 'net_pcap4,iface=eth4,mac=02:42:ac:12:00:03' --no-pci -m 1024 -- --rx "(0,0,0,2),(1,0,1,2),(2,0,2,2),(3,0,3,2),(4,0,4,2)," --tx "(0,2),(1,2),(2,2),(3,2),(4,2)," --w "3" --rpio "local,3000,3001" --hostif "local,3002" --bsz "(32,32),(32,32),(32,32)"
PMD: Cannot parse arguments: wrong key or value
params=
EAL: failed to initialize net_pcap0 device
EAL: Bus (vdev) probe failed.
EAL: FATAL: Cannot probe devices EAL: Cannot probe devices
restarting riot in 5 seconds …
/home/pfe/riot/build/app/riot -c 0xffff -n 2 --lcores='0@0,1@1,2@2,3@3,4@4,5@5,6@6,7@7,8@8,9@9,10@10,11@11,12@12,13@13,14@14,15@15' --log-level=5 --vdev 'net_pcap0,iface=tunl0,mac=' --vdev 'net_pcap1,iface=eth1,mac=02:42:ac:15:00:03' --vdev 'net_pcap2,iface=eth2,mac=02:42:ac:14:00:02' --vdev 'net_pcap3,iface=eth3,mac=02:42:ac:13:00:05' --vdev 'net_pcap4,iface=eth4,mac=02:42:ac:12:00:03' --no-pci -m 1024 -- --rx "(0,0,0,2),(1,0,1,2),(2,0,2,2),(3,0,3,2),(4,0,4,2)," --tx "(0,2),(1,2),(2,2),(3,2),(4,2)," --w "3" --rpio "local,3000,3001" --hostif "local,3002" --bsz "(32,32),(32,32),(32,32)"
EAL: WARNING: Master core has no memory on local socket!
PMD: Cannot parse arguments: wrong key or value
params=
EAL: failed to initialize net_pcap0 device
EAL: Bus (vdev) probe failed. EAL: FATAL: Cannot probe devices
I see. Somehow $PWD wasn’t set to your local folder due to using sudo.
Regarding the RIOT restarts: you are running on a dual-socket server, which is a bit more complicated. You have to pin cores (DPDK style) so each riot is limited to just one socket. A simple way of doing so is to create a vmxt config file in the current folder and reference it in docker-compose:
vMX$ cat vmxt1.conf
eal_args=--lcores=1@1,2@2,3@3
mwiget@pilatus:~/Dropbox/git/OpenJNPR-Container-vMX$ grep vmxt1 docker-compose.yml
- VMXT=vmxt1.conf
And
vMX$ cat vmxt2.conf
eal_args=--lcores=4@4,5@5,6@6
vMX$ grep VMXT docker-compose.yml
- VMXT=vmxt1.conf
- VMXT=vmxt2.conf
This limits the cores used by RIOT. Currently you can only use cores from socket 0.
But it only worked for one vmx instance. I know, not what you want really.
mwiget@pilatus:~/Dropbox/git/OpenJNPR-Container-vMX$ ps ax|grep riot
25265 pts/0 S 0:00 sh /usr/share/pfe/start_dpdk_riot.sh 0x0BAA
25277 pts/0 S 0:00 sh start_riot.sh
25461 pts/0 Sl 0:16 /home/pfe/riot/build/app/riot -c 0xffff -n 2 --lcores=1@1,2@2,3@3 --log-level=5 --vdev net_pcap0,iface=eth1,mac=02:42:c0:a8:d0:02 --vdev net_pcap1,iface=eth2,mac=02:42:c0:a8:c0:03 --vdev net_pcap2,iface=eth3,mac=02:42:c0:a8:b0:03 --no-pci -m 1024 -- --rx (0,0,0,2),(1,0,1,2),(2,0,2,2), --tx (0,2),(1,2),(2,2), --w 3 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(32,32),(32,32)
25497 pts/0 S 0:00 sh /usr/share/pfe/start_dpdk_riot.sh 0x0BAA
25509 pts/0 S 0:00 sh start_riot.sh
26046 pts/0 R+ 0:00 grep --color=auto riot
But I thought I'd let you know why it is failing on your end. Any chance of trying it out on a single-socket machine first?
I tried without lcores on a single-CPU-socket physical server, and then with lcores on the same server.
I'm still getting the same errors below when riot tries to load (it restarts in a loop every 5 seconds).
Not even one vMX boots up 😦
------
Seriously feeling frustrated.
Could you explain what riot is and where I can find more documentation?
So in your case, are you saying one can only run one vMX instance per physical server, due to the one-vMX-per-CPU-socket limitation you experienced?
------ see the lcores passed to riot below ------
restarting riot in 5 seconds …
/home/pfe/riot/build/app/riot -c 0x3f -n 2 --lcores=1@1,2@2,3@3 --log-level=5 --vdev 'net_pcap0,iface=tunl0,mac=' --vdev 'net_pcap1,iface=eth1,mac=02:42:ac:1a:00:03' --vdev 'net_pcap2,iface=eth2,mac=02:42:ac:19:00:02' --vdev 'net_pcap3,iface=eth3,mac=02:42:ac:18:00:05' --vdev 'net_pcap4,iface=eth4,mac=02:42:ac:17:00:03' --no-pci -m 1024 -- --rx "(0,0,0,2),(1,0,1,2),(2,0,2,2),(3,0,3,2),(4,0,4,2)," --tx "(0,2),(1,2),(2,2),(3,2),(4,2)," --w "3" --rpio "local,3000,3001" --hostif "local,3002" --bsz "(32,32),(32,32),(32,32)"
PMD: Cannot parse arguments: wrong key or value
params=
EAL: failed to initialize net_pcap0 device
EAL: Bus (vdev) probe failed.
EAL: FATAL: Cannot probe devices
EAL: Cannot probe devices
Sorry I can't be of more help. I'm using single-socket servers with Ubuntu and don't experience these issues. The core limit used to work with some Junos releases, but apparently not anymore. I can only suggest trying to run it on single-socket servers, or looking at commercial offerings to run Junos vMX amongst other vendors at https://www.tesuto.com/
RIOT is basically the PFE simulator.