How to have fun: more than just a game, an "ALL-IN-ONE LAB Session"
The combination of Ubuntu 14.04, VirtualBox, Vagrant, Open vSwitch, LXC, Docker and the POX controller gives you the opportunity to create a great lab environment on a single workstation. The solution I am outlining in this article is not intended for a production network, but for personal testing on a single host. When I build a lab I like it to be isolated from any production network, but also flexible enough to get Internet access from/to the lab.
Workflow Ubuntu 14.04 OS
I assume you already have Ubuntu 14.04 installed. Assign a static IP address to it. In this example NetworkManager was disabled during the OS installation; make sure you disable NetworkManager before proceeding. The Ubuntu box is assigned 192.168.10.253.
nano /etc/network/interfaces
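A minimal static configuration for /etc/network/interfaces could look like the following sketch; the interface name eth0, the gateway (taken from the bridge script later in this article) and the DNS server are assumptions, so adjust them to your own network:
auto eth0
iface eth0 inet static
    address 192.168.10.253
    netmask 255.255.255.0
    gateway 192.168.10.10
    dns-nameservers 8.8.8.8
Apply it with "ifdown eth0 && ifup eth0" or simply reboot.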
Workflow VirtualBox and Vagrant
VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. VirtualBox runs on Windows, Linux, Macintosh, and Solaris hosts and supports a large number of guest operating systems including but not limited to Windows (NT 4.0, 2000, XP, Server 2003, Vista, Windows 7, Windows 8), DOS/Windows 3.x, Linux (2.4, 2.6 and 3.x), Solaris and OpenSolaris, OS/2, and OpenBSD.
Start the VirtualBox installation:
sudo apt-get install dpkg-dev virtualbox-dkms
You can also install VirtualBox from the Ubuntu Software Center. If you are looking for the latest version, do the following:
nano /etc/apt/sources.list
then add the following as the last line:
#virtualbox
deb http://download.virtualbox.org/virtualbox/debian trusty contrib
Then import the Oracle public key, update the package lists and install VirtualBox 4.3:
wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
sudo apt-get update
sudo apt-get install virtualbox-4.3
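Before moving on, a quick sanity check that the hypervisor is installed:
VBoxManage --version
It should print a 4.3.x version string.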
Vagrant is a great open source tool for configuring and deploying multiple development environments. It works on Linux, Mac OS X or Windows and, although by default it uses VirtualBox for managing the virtualization, it can be used with other providers such as VMware or AWS. With an easy-to-use workflow and a focus on automation, Vagrant lowers development environment setup time and increases development/production parity. The advantage of using Vagrant is that, by using a central place for configuration, you can deploy virtual machines packed with everything you need. Moreover, it allows team members to run multiple environments with exactly the same configuration.
Download and install the Vagrant package with the following commands:
root@cloud:~# wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.2_x86_64.deb
root@cloud:~# dpkg -i vagrant_1.7.2_x86_64.deb
Reconfigure the VirtualBox DKMS module:
root@cloud:~# dpkg-reconfigure virtualbox-dkms
Install a box (an "image") that can be used by multiple Vagrant environments later on. Run the following command to add the precise64 box from the Vagrant website:
root@cloud:~# vagrant box add precise64 http://files.vagrantup.com/precise64.box
==> box: Adding box 'precise64' (v0) for provider:
box: Downloading: http://files.vagrantup.com/precise64.box
==> box: Successfully added box 'precise64' (v0) for 'virtualbox'
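As a quick check, you can list the boxes Vagrant now knows about (the exact output format depends on the Vagrant version):
root@cloud:~# vagrant box list
precise64 should be listed with the virtualbox provider.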
For more boxes, have a look at the public catalog on the Vagrant website.
Now set up a project that will be deployed based on the precise64 box we just added to Vagrant.
Create a root directory for your project and navigate into it:
root@cloud:~# mkdir sdntest_project
root@cloud:~# cd sdntest_project/
root@cloud:~/sdntest_project# vagrant init
This creates a Vagrantfile, the central file for your project configuration.
Edit the Vagrantfile before deploying the guest machine using the box we added:
root@cloud:~/sdntest_project# nano Vagrantfile
On line 15, change the box name to make sure we use the box we have downloaded:
config.vm.box = "precise64"
In order to have host-to-host communication, I'm going to give each VM a private IP address. On line 28, uncomment the private_network setting and change the subnet to something suitable for your environment:
config.vm.network "private_network", ip: "192.168.20.10"
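Taken together, the relevant part of the Vagrantfile now looks roughly like this; the rest of the boilerplate generated by vagrant init can stay at its defaults:
Vagrant.configure(2) do |config|
  config.vm.box = "precise64"
  config.vm.network "private_network", ip: "192.168.20.10"
end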
Deploy the guest machine and log into it with the following commands:
root@cloud:~/sdntest_project# vagrant up
root@cloud:~/sdntest_project# vagrant ssh
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)
* Documentation: https://help.ubuntu.com/
New release '14.04.2 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Welcome to your Vagrant-built virtual machine.
Last login: Fri Sep 14 06:23:18 2012 from 10.0.2.2
vagrant@precise64:~$
After you are done working with the guest machine, you can exit and go back to the host with the following command:
vagrant@precise64:~$ exit
logout
Connection to 127.0.0.1 closed.
When you are done, gracefully shut down the guest operating system and power off the guest machine; vagrant destroy removes the machine completely:
weed@cloud:~/sdntest_project$ vagrant halt
==> default: Attempting graceful shutdown of VM...
weed@cloud:~/sdntest_project$ vagrant destroy
default: Are you sure you want to destroy the 'default' VM? [y/N] y
Workflow Open vSwitch
Open vSwitch is a production quality open source software switch designed to be used as a vswitch in virtualized server environments. A vswitch forwards traffic between different VMs on the same physical host and also forwards traffic between VMs and the physical network. Open vSwitch supports standard management interfaces (e.g. sFlow, NetFlow, IPFIX, RSPAN, CLI), and is open to programmatic extension and control using OpenFlow and the OVSDB management protocol.
Open vSwitch can currently run on any Linux-based virtualization platform (kernel 2.6.32 and newer), including KVM, VirtualBox, Xen, Xen Cloud Platform and XenServer. As of Linux 3.3 it is part of the mainline kernel. The bulk of the code is written in platform-independent C and is easily ported to other environments.
Open vSwitch is specially designed to make it easier to manage VM network configuration and monitor state spread across many physical hosts in dynamic virtualized environments.
Install the build dependencies, then download, build and install the Open vSwitch 2.3.1 packages:
root@cloud:~# apt-get install -y build-essential fakeroot debhelper autoconf automake libssl-dev graphviz python-all python-qt4 python-twisted-conch libtool git tmux vim python-pip python-paramiko python-sphinx uuid-runtime
root@cloud:~# pip install alabaster
root@cloud:~# wget http://openvswitch.org/releases/openvswitch-2.3.1.tar.gz
root@cloud:~# tar xf openvswitch-2.3.1.tar.gz
root@cloud:~# pushd openvswitch-2.3.1
~/openvswitch-2.3.1 ~
root@cloud:~/openvswitch-2.3.1# DEB_BUILD_OPTIONS='parallel=8 nocheck' fakeroot debian/rules binary
root@cloud:~/openvswitch-2.3.1# popd
~
root@cloud:~# dpkg -i openvswitch-common*.deb openvswitch-datapath-dkms*.deb python-openvswitch*.deb openvswitch-pki*.deb openvswitch-switch*.deb
root@cloud:~# rm -rf *openvswitch*
root@cloud:~# ovs-vsctl show
2f24aa69-eb1b-42a6-8f0d-5b1b42a9b447
ovs_version: "2.3.1"
Workflow LXC
Install LXC and create an Open vSwitch bridge that both the containers and the host uplink will attach to:
root@cloud:~# aptitude -y install lxc
root@cloud:~# ovs-vsctl add-br virbr0
root@cloud:~# ovs-vsctl show
2f24aa69-eb1b-42a6-8f0d-5b1b42a9b447
Bridge "virbr0"
Port "virbr0"
Interface "virbr0"
type: internal
ovs_version: "2.3.1"
Next, a small script moves the host uplink (eth0) into the bridge and puts an address on the bridge interface instead:
root@cloud:~# nano grean.sh
#!/bin/bash
set -x
# attach the physical NIC to the OVS bridge as a port
ovs-vsctl add-port virbr0 eth0
# put the host address on the bridge interface
ifconfig virbr0 192.168.10.200 netmask 255.255.255.0
# clear the address from eth0 and leave it up as a plain bridge port
ifconfig eth0 0.0.0.0 up
# restore the default route via the LAN gateway
route add default gw 192.168.10.10
Run the script (for example with bash grean.sh); the set -x trace confirms each step:
+ ovs-vsctl add-port virbr0 eth0
+ ifconfig virbr0 192.168.10.200 netmask 255.255.255.0
+ ifconfig eth0 0.0.0.0 up
+ route add default gw 192.168.10.10
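At this point the host should still be reachable, now via the bridge. A quick check that eth0 really ended up as a port of virbr0:
root@cloud:~# ovs-vsctl list-ports virbr0
eth0
ifconfig virbr0 should also show the 192.168.10.200 address set by the script.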
root@cloud:~# ls /usr/share/lxc/templates/
lxc-alpine lxc-archlinux lxc-centos lxc-debian lxc-fedora lxc-openmandriva lxc-oracle lxc-sshd lxc-ubuntu-cloud
lxc-altlinux lxc-busybox lxc-cirros lxc-download lxc-gentoo lxc-opensuse lxc-plamo lxc-ubuntu
root@cloud:~# lxc-create -n debian01 -t debian
Download complete.
Copying rootfs to /var/lib/lxc/debian01/rootfs...Generating locales (this might take a while)...
en_US.UTF-8... done
Generation complete.
update-rc.d: using dependency based boot sequencing
update-rc.d: using dependency based boot sequencing
update-rc.d: using dependency based boot sequencing
update-rc.d: using dependency based boot sequencing
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
invoke-rc.d: policy-rc.d denied execution of restart.
Current default time zone: 'Africa/Dar_es_Salaam'
Local time is now: Wed Mar 11 02:43:58 EAT 2015.
Universal Time is now: Tue Mar 10 23:43:58 UTC 2015.
Root password is 'root', please change !
Edit the container configuration so that its virtual interface is attached to the virbr0 bridge:
root@cloud:~# nano /var/lib/lxc/debian01/config
#Network
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.name = eth0
lxc.network.mtu = 1500
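With this configuration the container will ask the LAN's DHCP server for an address (it ends up with 192.168.10.57 below). If you prefer a static address, LXC also accepts the following keys; the values here are only an example matching this lab's subnet:
lxc.network.ipv4 = 192.168.10.57/24
lxc.network.ipv4.gateway = 192.168.10.10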
root@cloud:~# lxc-start -n debian01 -d
Debian GNU/Linux 7 debian01 tty1
debian01 login:
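You can confirm that the container is running, and see which address it obtained, with lxc-ls (the column layout depends on the LXC version):
root@cloud:~# lxc-ls --fancy
The next step connects by the name debian01.internal.labnet; make sure that name resolves on your host (lab DNS or an /etc/hosts entry), or simply use the container's IP address.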
root@cloud:~# ssh root@debian01.internal.labnet
The authenticity of host 'debian01.internal.labnet (192.168.10.57)' can't be established.
ECDSA key fingerprint is 4c:6c:fe:af:7f:5b:c1:bb:75:50:fc:36:8f:09:20:6a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'debian01.internal.labnet,192.168.10.57' (ECDSA) to the list of known hosts.
root@debian01.internal.labnet's password:
Linux debian01 3.16.0-31-generic #41-Ubuntu SMP Tue Feb 10 15:24:04 UTC 2015 x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@debian01:~#
root@debian01:~# exit
logout
Connection to debian01.internal.labnet closed.
root@cloud:~# lxc-stop -n debian01
Workflow Docker
Install Docker and pull a base image from the public registry:
root@cloud:~# aptitude -y install docker.io
root@cloud:~# docker pull centos
root@cloud:~# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 2103b00b3fdf 14 hours ago 188.3 MB
<none> <none> 0bc55ae673f7 6 days ago 202.6 MB
<none> <none> 861c710fef70 6 days ago 284.1 MB
<none> <none> f6808a3e4d9e 6 days ago 202.6 MB
<none> <none> 88f9454e60dd 6 days ago 210 MB
Run a container from one of the images and install an SSH server inside it:
root@cloud:~# docker run 2103b00b3fdf /bin/bash -c "apt-get update; apt-get -y install openssh-server"
root@cloud:~# docker ps -a | head -2
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
81bd4aae7cc1 2103b00b3fdf "/bin/bash -c 'apt-g 2 hours ago Exited (0) 2 minutes ago kickass_pike
Commit the stopped container as a new image named ubuntu_sshd:
root@cloud:~# docker commit 81bd4aae7cc1 ubuntu_sshd
a3db452441ab9e02baa94343d17b93ad1b4fb04019b6b978efd8d66ae7efbd13
root@cloud:~# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu_sshd latest a3db452441ab 38 seconds ago 261.9 MB
<none> <none> 0bc55ae673f7 6 days ago 202.6 MB
<none> <none> 861c710fef70 6 days ago 284.1 MB
<none> <none> f6808a3e4d9e 6 days ago 202.6 MB
<none> <none> 88f9454e60dd 6 days ago 210 MB
<none> <none> 6cfa4d1f33fb 10 months ago 0 B
root@cloud:~# docker run ubuntu_sshd /usr/bin/which sshd
/usr/sbin/sshd
Now start a container from ubuntu_sshd, publishing container port 22 on host port 8081, and create a user inside it:
root@cloud:~# docker run -it -p 8081:22 ubuntu_sshd /bin/bash
root@b2118a50b612:/# adduser weed
Adding user `weed' ...
Adding new group `weed' (1000) ...
Adding new user `weed' (1000) with group `weed' ...
Creating home directory `/home/weed' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for weed
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] y
In another terminal on the host, check that the container is up and that the port mapping is in place:
root@cloud:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b2118a50b612 ubuntu_sshd:latest "/bin/bash" 3 minutes ago Up 3 minutes 0.0.0.0:8081->22/tcp elegant_sammet
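The container is only running a bash shell, so sshd itself has not been started yet. A minimal way to reach the new user through the mapped port (a sketch; start the daemon inside the container, then connect from another terminal on the host):
root@b2118a50b612:/# mkdir -p /var/run/sshd
root@b2118a50b612:/# /usr/sbin/sshd
root@cloud:~# ssh -p 8081 weed@127.0.0.1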
Workflow Pox
POX is a Python-based OpenFlow controller platform. Clone it, check the available branches and switch to the dart release branch:
root@cloud:~# git clone http://github.com/noxrepo/pox
root@cloud:~# pushd pox
root@cloud:~# git branch
root@cloud:~# git checkout dart
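From here you can start the controller; as a smoke test, the stock L2 learning switch component that ships with POX is enough, and the virbr0 bridge we created earlier can be pointed at it. The loopback address and the default OpenFlow port 6633 are assumptions that fit this single-host lab:
root@cloud:~/pox# ./pox.py forwarding.l2_learning
root@cloud:~# ovs-vsctl set-controller virbr0 tcp:127.0.0.1:6633
With that, traffic on virbr0 is forwarded under the control of POX, which is the whole point of this all-in-one lab.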