
Friday, April 25, 2008

Case Study: How to discover iSCSI targets with the linux-iscsi initiator package -- SuSE Linux 9 (iSCSI initiator) and Openfiler (iSCSI target)

Bird's-eye view of the end configuration...


Here is a bird's-eye view of the configuration we will have achieved by the end of this article:




My Grumblings with openfiler..


Those of you who have started experimenting with Openfiler may already like its features. The biggest concern I have with Openfiler is that its administrative GUI is full of bugs, or makes a lot of silent assumptions while working.

For example, it would not show logical volumes that had been created on an iSCSI external hard disk by another Openfiler virtual machine installation.

Somehow, CLI commands like pvscan, lvscan and vgscan are able to discover previously created physical volumes, logical volumes and volume groups, but the front-end GUI (http://<openfiler IP>:446) fails to do the same.

Although there is another open-source product called FreeNAS, I resisted the temptation to switch loyalties too soon; no product is without bugs, after all.

My requirement..


Anyway, my real requirement was to build a homegrown 10g RAC cluster using VirtualBox virtual machines. For better or worse, I had chosen SuSE Linux 9 (SP3) as the base operating system for the 10g RAC installation. A big reason was that several large customers, Office Depot among them, have implemented 10g RAC on SuSE Linux 9 SP3. As time goes by, I feel SuSE Linux will become an ever more popular platform; hence my persistence with this distribution.
There are ways to spoof the 10g RAC installation with 1 node too, but I wanted to simulate the real thing and be able to drive a Train-The-Trainer session for my teammates.

Looking back at my initial struggles...


I now realize that figuring out how to discover iSCSI targets on Ubuntu was much easier. That experience is documented here: Combining Openfiler and Virtualbox (Ubuntu guest OS on windows host)

My initial struggles were full of anguish, especially because I realized very early that I could not use the open-iscsi package with SuSE Linux 9 (2.6.5-7-244 kernel) at all: the open-iscsi package works only with kernels 2.6.14 and above. SuSE Linux 10.x, by contrast, seems to support it well.

Tough luck there.
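That kernel cutoff can be checked mechanically. Here is a small sketch (it assumes GNU sort's -V version-sort option is available; the version strings are the ones from this article):

```shell
# Decide between linux-iscsi and open-iscsi based on kernel version.
# open-iscsi needs kernel 2.6.14 or newer; this box runs 2.6.5 (SuSE Linux 9).
kernel="2.6.5"
minimum="2.6.14"

# sort -V orders version strings numerically per component,
# so the older version always sorts first.
if [ "$(printf '%s\n%s\n' "$kernel" "$minimum" | sort -V | head -n1)" = "$kernel" ] \
   && [ "$kernel" != "$minimum" ]; then
  echo "use linux-iscsi"
else
  echo "use open-iscsi"
fi
```

On a 2.6.5 kernel this prints "use linux-iscsi"; a plain string comparison would get 2.6.5 vs 2.6.14 wrong, which is why version sort is used.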

So, what's available on SuSE Linux 9 if you want to discover iSCSI target devices?


Well, there are options. The linux-iscsi package is readily available and, with a little (quite simple) configuration, works great. A lot of people tried to woo me toward other distributions like Oracle Enterprise Linux 5, which has the iscsi-initiator-utils package built in, but I stood my ground.
Here are some important distinctions between linux-iscsi and open-iscsi:

- The linux-iscsi package (aka iscsi-sfnet) reads /etc/iscsi.conf
- The open-iscsi package reads /etc/iscsid.conf. This package has an additional iscsiadm utility for discovering targets.
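To make the distinction concrete, here is a sketch of the minimal discovery configuration each package expects. The IP address is the one used later in this article, and the file is written to /tmp so as not to touch the real /etc files:

```shell
# linux-iscsi (iscsi-sfnet): discovery is driven by one DiscoveryAddress
# line in /etc/iscsi.conf (sample written to /tmp for illustration).
cat > /tmp/iscsi.conf <<'EOF'
DiscoveryAddress=10.143.213.233
EOF

# open-iscsi: daemon settings live in /etc/iscsid.conf instead, and
# discovery is a separate step done with the iscsiadm utility, e.g.:
#   iscsiadm -m discovery -t sendtargets -p 10.143.213.233:3260

# Confirm the linux-iscsi style config has exactly one discovery address.
grep -c '^DiscoveryAddress=' /tmp/iscsi.conf
```

The grep at the end prints 1, confirming the single-line configuration is all linux-iscsi needs for discovery.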


As of now, the linux-iscsi and open-iscsi projects have been merged (per their announcement) into a single open-iscsi project.

Now, the difficult part: figuring out the setup ..


The most difficult part was figuring out a setup that worked. Eventually, after umpteen tries, it did. On more than one occasion I wondered whether it was even worth trying the linux-iscsi initiator package against Openfiler as the iSCSI target; the iscsi-target drivers seemed more compatible with the open-iscsi initiator package (the Ubuntu experience dominating my thinking).

However, I now realize that this perception was delusional. All I really needed was a proper configuration of the linux-iscsi package as the iSCSI initiator.
I will assume that the reader is conversant with the terms iSCSI initiator/target.
If not, here is a crash course: iSCSI targets are the LUNs or logical volumes on your NAS device; the iSCSI initiator is the client machine that wants to use those LUNs or logical volumes. You dig?

With iscsid running at debug level 10 (# iscsid -d 10 &), I was getting the following error while discovering targets:
.. >> iscsid[17946]: connecting to 10.143.213.233:446
.. >> iscsid[17946]: connected local port 33785 to 10.143.213.233:446
.. >> iscsid[17946]: discovery session to 10.143.213.233:446 starting iSCSI login on fd 1
.. >> iscsid[17946]: sending login PDU with current stage 1, next stage 3, transit 0x80, isid 0x00023d000001
.. >> iscsid[17946]: >   InitiatorName=iqn.1987-05.com.cisco:01.51f06557c68
.. >> iscsid[17946]: >   InitiatorAlias=raclinux1
.. >> iscsid[17946]: >   SessionType=Discovery
.. >> iscsid[17946]: >   HeaderDigest=None
.. >> iscsid[17946]: >   DataDigest=None
.. >> iscsid[17946]: >   MaxRecvDataSegmentLength=8192
.. >> iscsid[17946]: >   X-com.cisco.PingTimeout=5
.. >> iscsid[17946]: >   X-com.cisco.sendAsyncText=Yes
.. >> iscsid[17946]: >   X-com.cisco.protocol=draft20
.. >> iscsid[17946]: wrote 48 bytes of PDU header
.. >> iscsid[17946]: wrote 248 bytes of PDU data
.. >> iscsid[17946]: socket 1 closed by target
.. >> iscsid[17946]: login I/O error, failed to receive a PDU
.. >> iscsid[17946]: retrying discovery login to 10.143.213.233
.. >> iscsid[17946]: disconnecting session 0x80b4890, fd 1
.. >> iscsid[17946]: discovery session to 10.143.213.233:446 sleeping for 2 seconds before next login attempt

I saw light at the end of the tunnel after trying a simple setup mentioned in http://www-941.ibm.com/collaboration/wiki/display/LinuxP/iSCSI

Let's talk about the experience in more detail now.

The setup on iscsi target (Openfiler) side..


[root@openfiler~]# uname -a
Linux openfiler.usdhcp.example.com 2.6.19.4-0.1.x86.i686.cmov #1 ..

I did not set up a network or subnet of allowed initiators for the LUNs (as
can be seen here, the /etc/initiators.allow and /etc/initiators.deny files are
non-existent):
[root@openfiler~]# ls /etc/initiators.allow
ls: /etc/initiators.allow: No such file or directory 

[root@openfiler~]# ls /etc/initiators.deny
ls: /etc/initiators.deny: No such file or directory 

[root@openfiler~]# more /etc/ietd.conf
Target iqn.2006-01.com.openfiler:openfiler.test
        Lun 0 Path=/dev/openfiler/test,Type=fileio

[root@openfiler~]# service iscsi-target status
ietd (pid 4164) is running...

Checking if the device drivers are loaded:
[root@openfiler~]# lsmod | grep scsi
iscsi_trgt             61788  4
scsi_mod              111756  2 sd_mod,usb_storage

Checking if the NAS device is discovered:
[root@openfiler~]# more /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ST332083 Model: 3A               Rev: 3.AA
Type:   Direct-Access                    ANSI SCSI revision: 02

Checking what logical volumes have been discovered:
[root@openfiler~]# cat /proc/net/iet/session
tid:1 name:iqn.2006-01.com.openfiler:openfiler.test

Discover the volume groups, logical volumes and physical volumes:
[root@openfiler~]# vgscan
Reading all physical volumes.  This may take a while...
Found volume group "openfiler" using metadata type lvm2

[root@openfiler~]# lvscan
ACTIVE            '/dev/openfiler/ocr' [1.00 GB] inherit
ACTIVE            '/dev/openfiler/vote' [1.00 GB] inherit
ACTIVE            '/dev/openfiler/asm' [60.00 GB] inherit
ACTIVE            '/dev/openfiler/test' [32.00 MB] inherit

[root@openfiler~]# pvscan
PV /dev/sda2   VG openfiler   lvm2 [122.30 GB / 60.27 GB free]
Total: 1 [122.30 GB] / in use: 1 [122.30 GB] / in no VG: 0 [0   ]

As you can see, three more logical volumes were discovered than
we have configured in /etc/ietd.conf. We will deal with this later:
[root@openfiler~]# ls -l /dev/openfiler
total 0
lrwxrwxrwx  1 root root 25 Apr 24 09:58 asm -> /dev/mapper/openfiler-asm
lrwxrwxrwx  1 root root 25 Apr 24 09:58 ocr -> /dev/mapper/openfiler-ocr
lrwxrwxrwx  1 root root 26 Apr 24 12:07 test -> /dev/mapper/openfiler-test
lrwxrwxrwx  1 root root 26 Apr 24 09:59 vote -> /dev/mapper/openfiler-vote

The real deal: iSCSI initiator setup using the linux-iscsi package on SuSE Linux 9 SP3


raclinux1:~ # uname -a
Linux raclinux1 2.6.5-7.244-default #1 Mon Dec 12 18:32:25 UTC 2005 i686 i686 i386 GNU/Linux

Make sure the linux-iscsi package is installed:
raclinux1:/etc # rpm -qa | grep linux-iscsi
linux-iscsi-4.0.1-98

Show the iSCSI devices discovered so far:
raclinux1:/etc # iscsi-ls
###############################################################################

iSCSI driver is not loaded

###############################################################################

Since the iSCSI driver is missing, load it (this driver is also known as iscsi-sfnet):
raclinux1:/etc # modprobe iscsi

Verify that the iscsi driver was loaded:
raclinux1:/etc # lsmod | grep scsi
iscsi                 182192  0
scsi_mod              112972  5 iscsi,sg,st,sd_mod,sr_mod
raclinux1:/etc #

Check what devices have been configured. Right now, no iscsi devices have been discovered:
raclinux1:/etc # iscsi-ls
*******************************************************************************
Cisco iSCSI Driver Version ... 4.0.198 ( 21-May-2004 )
*******************************************************************************
raclinux1:/etc #

Configure the /etc/iscsi.conf file for linux-iscsi -- the simplest possible case. This is THE key step.
Trivia
Initially, I had specified port 446 in the DiscoveryAddress as well, and that was causing the very cryptic 'login I/O error, failed to receive a PDU' error.


I searched all over the internet to resolve this error, including the Openfiler forums, only to find that a few people had resolved it with a firmware upgrade! Unfortunately, there is very little literature on this error, which is why I hope this article helps someone out there facing the same situation.

raclinux1:~ # more /etc/iscsi.conf
# this is the IP of the openfiler iscsi target machine
DiscoveryAddress=10.143.213.233
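For reference, here is the same file with the port spelled out, a hypothetical variant: when no port is given, linux-iscsi assumes the standard iSCSI port, 3260. Port 446, which I had used in my failed attempt, is Openfiler's web GUI, not an iSCSI listener, which is why the target closed the discovery socket.

```
# /etc/iscsi.conf -- explicit-port variant (3260 is the standard iSCSI port)
DiscoveryAddress=10.143.213.233:3260
```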

Verify that we have a unique IQN name for the initiator node (SuSE Linux 9.3):
raclinux1:~ # more /etc/initiatorname.iscsi
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator.  The InitiatorName must be unique
## for each iSCSI initiator.  Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1987-05.com.cisco:01.51f06557c68

Now, start up iscsid process with a high debug level to see what goes on behind the scenes.
I chose debug level 10 for no particular reason:
raclinux1:/etc # iscsid -d 10 &
[1] 30332
raclinux1:/etc # 1209056895.780916 >> iscsid[30332]: iSCSI debug level 10
1209056895.781428 >> iscsid[30332]: InitiatorName=iqn.1987-05.com.cisco:01.51f06557c68
1209056895.781790 >> iscsid[30332]: InitiatorAlias=raclinux1
1209056895.782101 >> iscsid[30332]: version 4.0.198 ( 21-May-2004)
1209056895.785327 >> iscsid[30333]: pid file fd 0
1209056895.785694 >> iscsid[30333]: locked pid file /var/run/iscsid.pid
1209056895.795251 >> iscsid[30333]: updating config 0xbfffeb10 from /etc/iscsi.conf
...
....
1209056895.799724 >> iscsid[30334]: sendtargets discovery process 0x80a80c0 starting, address 10.143.213.233:3260, continuous 1
1209056895.800315 >> iscsid[30334]: sendtargets discovery process 0x80a80c0 to 10.143.213.233:3260 using isid 0x00023d000001
1209056895.802181 >> iscsid[30334]: connecting to 10.143.213.233:3260
1209056895.803657 >> iscsid[30334]: connected local port 34261 to 10.143.213.233:3260
1209056895.804189 >> iscsid[30334]: discovery session to 10.143.213.233:3260 starting iSCSI login on fd 1
1209056895.805081 >> iscsid[30334]: sending login PDU with current stage 1, next stage 3, transit 0x80, isid 0x00023d000001
1209056895.805415 >> iscsid[30334]: >    InitiatorName=iqn.1987-05.com.cisco:01.51f06557c68
1209056895.805807 >> iscsid[30334]: >    InitiatorAlias=raclinux1
1209056895.806120 >> iscsid[30334]: >    SessionType=Discovery
1209056895.806535 >> iscsid[30334]: >    HeaderDigest=None
1209056895.806918 >> iscsid[30334]: >    DataDigest=None
1209056895.807213 >> iscsid[30334]: >    MaxRecvDataSegmentLength=8192
1209056895.807515 >> iscsid[30334]: >    X-com.cisco.PingTimeout=5
1209056895.807910 >> iscsid[30334]: >    X-com.cisco.sendAsyncText=Yes
1209056895.808217 >> iscsid[30334]: >    X-com.cisco.protocol=draft20
1209056895.808555 >> iscsid[30334]: wrote 48 bytes of PDU header
1209056895.809044 >> iscsid[30334]: wrote 248 bytes of PDU data
1209056895.810896 >> iscsid[30333]: done starting discovery processes
...
...
1209056895.825881 >> iscsid[30334]: discovery login success to 10.143.213.233
1209056895.800928 >> iscsid[30334]: resolved 10.143.213.233 to 10.4294967183.4294967253.4294967273
...
...
TargetName=iqn.2006-01.com.openfiler:openfiler.test
1209056895.831110 >> iscsid[30334]: >    TargetAddress=10.143.213.233:3260,1
1209056895.831416 >> iscsid[30334]: discovery session to 10.143.213.233:3260 received text response, 88 data bytes, ttt 0xffffffff, final 0x80
...
...
1209056895.849821 >> iscsid[30333]: mkdir /var/lib
1209056895.850134 >> iscsid[30333]: mkdir /var/lib/iscsi
1209056895.850439 >> iscsid[30333]: opened bindings file /var/lib/iscsi/bindings
1209056895.850769 >> iscsid[30333]: locked bindings file /var/lib/iscsi/bindings
1209056895.851143 >> iscsid[30333]: scanning bindings file for 1 unbound sessions
1209056895.851580 >> iscsid[30333]: iSCSI bus 0 target 0 bound to session #1 to iqn.2006-01.com.openfiler:openfiler.test
1209056895.851906 >> iscsid[30333]: done scanning bindings file at line 11
1209056895.852320 >> iscsid[30333]: unlocked bindings file /var/lib/iscsi/bindings

Voila! A new virtual disk is discovered!



Paydirt! The iscsi targets are detected as per messages in /var/log/messages


iSCSI: 4.0.188.26 ( 21-May-2004) built for Linux 2.6.5-7.244-default
iSCSI: will translate deferred sense to current sense on disk command responses
iSCSI: control device major number 254
scsi15 : SFNet iSCSI driver
iSCSI: detected HBA host #15
iSCSI: bus 0 target 0 = iqn.2006-01.com.openfiler:openfiler.test
iSCSI: bus 0 target 0 portal 0 = address 10.143.213.233 port 3260 group 1
iSCSI: bus 0 target 0 established session #1, portal 0, address 10.143.213.233 port 3260 group 1
Vendor: Openfile  Model: Virtual disk      Rev: 0
Type:   Direct-Access                      ANSI SCSI revision: 04
SCSI device sda: 65536 512-byte hdwr sectors (34 MB)
iSCSI: starting timer thread at 11948918
iSCSI: bus 0 target 0 trying to establish session to portal 0, address 10.143.213.233 port 3260 group 1
SCSI device sda: drive cache: write through
sda: unknown partition table
Attached scsi disk sda at scsi15, channel 0, id 0, lun 0
Attached scsi generic sg0 at scsi15, channel 0, id 0, lun 0,  type 0
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.

Verifying that the target LUNs were indeed discovered:
raclinux1:~ # more /var/lib/iscsi/bindings
# iSCSI bindings, file format version 1.0.
# NOTE: this file is automatically maintained by the iSCSI daemon.
# You should not need to edit this file under most circumstances.
# If iSCSI targets in this file have been permanently deleted, you
# may wish to delete the bindings for the deleted targets.
#
# Format:
# bus   target  iSCSI
# id    id      TargetName
#
0       0       iqn.2006-01.com.openfiler:openfiler.test
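The bindings file is whitespace-delimited (bus id, target id, IQN), so the persistent bindings are easy to query from a script. A sketch, with the sample file recreated under /tmp for illustration:

```shell
# Recreate the bindings table and look up the IQN bound to target id 0.
cat > /tmp/bindings <<'EOF'
# bus   target  iSCSI
# id    id      TargetName
0       0       iqn.2006-01.com.openfiler:openfiler.test
EOF

# Skip comment lines, match on the target-id column, print the IQN.
awk '!/^#/ && $2 == 0 { print $3 }' /tmp/bindings
```

This prints iqn.2006-01.com.openfiler:openfiler.test, the same binding the daemon reported above.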

Let's restart the linux-iscsi service (without the debug flag this time):
raclinux1:/etc # rciscsi stop
Stopping iSCSI: sync umount sync iscsid

raclinux1:/etc # rciscsi start
Starting iSCSI: iscsi iscsid fsck/mount     done

raclinux1:/etc # rciscsi status
Checking for service iSCSI iSCSI driver is loaded
running

Check what devices were discovered:
raclinux1:/etc # iscsi-ls
*******************************************************************************
Cisco iSCSI Driver Version ... 4.0.198 ( 21-May-2004 )
*******************************************************************************
TARGET NAME             : iqn.2006-01.com.openfiler:openfiler.test
TARGET ALIAS            :
HOST NO                 : 18
BUS NO                  : 0
TARGET ID               : 0
TARGET ADDRESS          : 1.1.3923087114.0:0
SESSION STATUS          : DROPPED AT Thu Apr 24 10:19:16 2008
NO. OF PORTALS          : 1
Segmentation fault

raclinux1:/etc # fdisk -l /dev/sda

Disk /dev/sda: 33 MB, 33554432 bytes
2 heads, 32 sectors/track, 1024 cylinders
Units = cylinders of 64 * 512 = 32768 bytes

Disk /dev/sda doesn't contain a valid partition table

You can now partition the iscsi device using fdisk:
raclinux1:/etc # fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1024, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1024, default 1024):
Using default value 1024

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
raclinux1:/etc #

raclinux1:/etc # fdisk -l /dev/sda

Disk /dev/sda: 33 MB, 33554432 bytes
2 heads, 32 sectors/track, 1024 cylinders
Units = cylinders of 64 * 512 = 32768 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1024       32752   83  Linux
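As a sanity check, the geometry fdisk reports multiplies out to the stated disk size:

```shell
# 2 heads x 32 sectors/track x 512 bytes/sector = 32768 bytes per cylinder;
# 1024 cylinders x 32768 bytes/cylinder = 33554432 bytes,
# i.e. the "33 MB, 33554432 bytes" line in the fdisk output above.
echo $(( 2 * 32 * 512 * 1024 ))
```

This prints 33554432, matching the byte count fdisk reported for /dev/sda.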

raclinux1:/etc # ls -l /dev/disk
total 132
drwxr-xr-x   4 root root   4096 Apr 10 08:44 .
drwxr-xr-x  33 root root 118784 Apr 24 10:19 ..
drwxr-xr-x   2 root root   4096 Apr 24 10:21 by-id
drwxr-xr-x   2 root root   4096 Apr 24 10:21 by-path

raclinux1:/etc # ls -l /dev/disk/by-id
total 8
...
.. iscsi-iqn.2006-01.com.openfiler:openfiler.test-0 -> ../../sda
.. iscsi-iqn.2006-01.com.openfiler:openfiler.test-0-generic -> ../../sg0
.. iscsi-iqn.2006-01.com.openfiler:openfiler.test-0p1 -> ../../sda1

raclinux1:/etc # ls -l /dev/disk/by-path
total 8
.. ip-10.143.213.233-iscsi-iqn.2006-01.com.openfiler:openfiler.test-0 -> ../../sda
.. ip-10.143.213.233-iscsi-iqn.2006-01.com.openfiler:openfiler.test-0-generic -> ../../sg0
.. ip-10.143.213.233-iscsi-iqn.2006-01.com.openfiler:openfiler.test-0p1 -> ../../sda1
...

Meanwhile, let's look at the sessions on the Openfiler server:


[root@openfiler~]# cat /proc/net/iet/session
tid:1 name:iqn.2006-01.com.openfiler:openfiler.test
sid:564049469047296 initiator:iqn.1987-05.com.cisco:01.51f06557c68
cid:0 ip:10.143.213.238 state:active hd:none dd:none

[root@openfiler~]# more /proc/net/iet/*
::::::::::::::
/proc/net/iet/session
::::::::::::::
tid:1 name:iqn.2006-01.com.openfiler:openfiler.test
sid:564049469047296 initiator:iqn.1987-05.com.cisco:01.51f06557c68
cid:0 ip:10.143.213.238 state:active hd:none dd:none
::::::::::::::
/proc/net/iet/session.xml
::::::::::::::
<?xml version="1.0" ?>

<info>

<target id="1" name="iqn.2006-01.com.openfiler:openfiler.test">
<session id="564049469047296" initiator="iqn.1987-05.com.cisco:01.51f06557c68">
<connection id="0" ip="10.143.213.238" state="active" hd="none" dd="none" />
</session>
</target>

</info>

::::::::::::::
/proc/net/iet/volume
::::::::::::::
tid:1 name:iqn.2006-01.com.openfiler:openfiler.test
lun:0 state:0 iotype:fileio iomode:wt path:/dev/openfiler/test
::::::::::::::
/proc/net/iet/volume.xml
::::::::::::::
<?xml version="1.0" ?>

<info>

<target id="1" name="iqn.2006-01.com.openfiler:openfiler.test">
<lun number="0" state="0" iotype="fileio" iomode="wt" path="/dev/openfiler/test" />
</target>

</info>

Adding all the discovered logical volumes to Openfiler's published iSCSI targets:


[root@openfiler~]# more /etc/ietd.conf
Target iqn.2006-01.com.openfiler:openfiler.test
        Lun 0 Path=/dev/openfiler/test,Type=fileio
Target iqn.2006-01.com.openfiler:openfiler.asm
        Lun 1 Path=/dev/openfiler/asm,Type=fileio
Target iqn.2006-01.com.openfiler:openfiler.ocr
        Lun 2 Path=/dev/openfiler/ocr,Type=fileio
Target iqn.2006-01.com.openfiler:openfiler.vote
        Lun 3 Path=/dev/openfiler/vote,Type=fileio
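Since each target here exports exactly one volume, the stanzas are uniform and can be generated rather than typed. A sketch (it assumes the /dev/openfiler/{test,asm,ocr,vote} volumes shown earlier; the LUN counter mirrors the 0-3 numbering in the file above):

```shell
# Emit one single-LUN iSCSI target stanza per logical volume,
# in the Target/Lun format that ietd.conf expects.
lun=0
for lv in test asm ocr vote; do
  printf 'Target iqn.2006-01.com.openfiler:openfiler.%s\n' "$lv"
  printf '        Lun %d Path=/dev/openfiler/%s,Type=fileio\n' "$lun" "$lv"
  lun=$((lun + 1))
done
```

Redirecting this loop's output into /etc/ietd.conf would reproduce the file shown above; adding a volume is then a one-word change to the list.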

[root@openfiler~]# service iscsi-target restart
Stopping iSCSI target service:                             [  OK  ]
Starting iSCSI target service:                             [  OK  ]

[root@openfiler~]# more /proc/net/iet/*
::::::::::::::
/proc/net/iet/session
::::::::::::::
tid:4 name:iqn.2006-01.com.openfiler:openfiler.vote
sid:282574492336640 initiator:iqn.1987-05.com.cisco:01.51f06557c68
cid:0 ip:10.143.213.238 state:active hd:none dd:none
tid:3 name:iqn.2006-01.com.openfiler:openfiler.ocr
sid:564049469047296 initiator:iqn.1987-05.com.cisco:01.51f06557c68
cid:0 ip:10.143.213.238 state:active hd:none dd:none
tid:2 name:iqn.2006-01.com.openfiler:openfiler.asm
sid:845524445757952 initiator:iqn.1987-05.com.cisco:01.51f06557c68
cid:0 ip:10.143.213.238 state:active hd:none dd:none
tid:1 name:iqn.2006-01.com.openfiler:openfiler.test
sid:1126999422468608 initiator:iqn.1987-05.com.cisco:01.51f06557c68
cid:0 ip:10.143.213.238 state:active hd:none dd:none
::::::::::::::
/proc/net/iet/session.xml
::::::::::::::
<?xml version="1.0" ?>

<info>

<target id="4" name="iqn.2006-01.com.openfiler:openfiler.vote">
<session id="282574492336640" initiator="iqn.1987-05.com.cisco:01.51f06557c68">
<connection id="0" ip="10.143.213.238" state="active" hd="none" dd="none" />
</session>
</target>

<target id="3" name="iqn.2006-01.com.openfiler:openfiler.ocr">
<session id="564049469047296" initiator="iqn.1987-05.com.cisco:01.51f06557c68">
<connection id="0" ip="10.143.213.238" state="active" hd="none" dd="none" />
</session>
</target>

<target id="2" name="iqn.2006-01.com.openfiler:openfiler.asm">
<session id="845524445757952" initiator="iqn.1987-05.com.cisco:01.51f06557c68">
<connection id="0" ip="10.143.213.238" state="active" hd="none" dd="none" />
</session>
</target>

<target id="1" name="iqn.2006-01.com.openfiler:openfiler.test">
<session id="1126999422468608" initiator="iqn.1987-05.com.cisco:01.51f06557c68">
<connection id="0" ip="10.143.213.238" state="active" hd="none" dd="none" />
</session>
</target>

</info>

::::::::::::::
/proc/net/iet/volume
::::::::::::::
tid:4 name:iqn.2006-01.com.openfiler:openfiler.vote
lun:0 state:0 iotype:fileio iomode:wt path:/dev/openfiler/asm
tid:3 name:iqn.2006-01.com.openfiler:openfiler.ocr
lun:0 state:0 iotype:fileio iomode:wt path:/dev/openfiler/asm
tid:2 name:iqn.2006-01.com.openfiler:openfiler.asm
lun:0 state:0 iotype:fileio iomode:wt path:/dev/openfiler/asm
tid:1 name:iqn.2006-01.com.openfiler:openfiler.test
lun:0 state:0 iotype:fileio iomode:wt path:/dev/openfiler/test
::::::::::::::
/proc/net/iet/volume.xml
::::::::::::::
<?xml version="1.0" ?>

<info>

<target id="4" name="iqn.2006-01.com.openfiler:openfiler.vote">
<lun number="0" state="0" iotype="fileio" iomode="wt" path="/dev/openfiler/vote" />
</target>

<target id="3" name="iqn.2006-01.com.openfiler:openfiler.ocr">
<lun number="0" state="0" iotype="fileio" iomode="wt" path="/dev/openfiler/ocr" />
</target>

<target id="2" name="iqn.2006-01.com.openfiler:openfiler.asm">
<lun number="0" state="0" iotype="fileio" iomode="wt" path="/dev/openfiler/asm" />
</target>

<target id="1" name="iqn.2006-01.com.openfiler:openfiler.test">
<lun number="0" state="0" iotype="fileio" iomode="wt" path="/dev/openfiler/test" />
</target>

</info>

Meanwhile, on the initiator:


Now, let us check the devices detected (the iscsi-device command works more reliably than iscsi-ls):
raclinux1:/etc # iscsi-device /dev/sda
/dev/sda: 0   0   0       10.143.213.233   3260  iqn.2006-01.com.openfiler:openfiler.test
raclinux1:/etc # iscsi-device /dev/sdb
/dev/sdb: 0   1   0       10.143.213.233   3260  iqn.2006-01.com.openfiler:openfiler.asm
raclinux1:/etc # iscsi-device /dev/sdc
/dev/sdc: 0   2   0       10.143.213.233   3260  iqn.2006-01.com.openfiler:openfiler.vote
raclinux1:/etc # iscsi-device /dev/sdd
/dev/sdd: 0   3   0       10.143.213.233   3260  iqn.2006-01.com.openfiler:openfiler.ocr

raclinux1:/etc # fdisk -l /dev/sd*

Disk /dev/sda: 33 MB, 33554432 bytes
2 heads, 32 sectors/track, 1024 cylinders
Units = cylinders of 64 * 512 = 32768 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1024       32752   83  Linux

Disk /dev/sda1: 33 MB, 33538048 bytes
2 heads, 32 sectors/track, 1023 cylinders
Units = cylinders of 64 * 512 = 32768 bytes

Disk /dev/sdb: 64.4 GB, 64424509440 bytes
64 heads, 32 sectors/track, 61440 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 64.4 GB, 64424509440 bytes
64 heads, 32 sectors/track, 61440 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 64.4 GB, 64424509440 bytes
64 heads, 32 sectors/track, 61440 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdd doesn't contain a valid partition table

raclinux1:/etc # more /var/lib/iscsi/bindings
# iSCSI bindings, file format version 1.0.
# NOTE: this file is automatically maintained by the iSCSI daemon.
# You should not need to edit this file under most circumstances.
# If iSCSI targets in this file have been permanently deleted, you
# may wish to delete the bindings for the deleted targets.
#
# Format:
# bus   target  iSCSI
# id    id      TargetName
#
0       0       iqn.2006-01.com.openfiler:openfiler.test
0       1       iqn.2006-01.com.openfiler:openfiler.asm
0       2       iqn.2006-01.com.openfiler:openfiler.vote
0       3       iqn.2006-01.com.openfiler:openfiler.ocr

************************************************************************************************
Caveat:

Somehow, the iscsi-ls utility was not working: it segfaults after printing the first target. The devices themselves were accessible all right.

The iscsi-device command, by contrast, works beautifully.
************************************************************************************************

raclinux1:/etc # iscsi-ls
*******************************************************************************
Cisco iSCSI Driver Version ... 4.0.198 ( 21-May-2004 )
*******************************************************************************
TARGET NAME             : iqn.2006-01.com.openfiler:openfiler.test
TARGET ALIAS            :
HOST NO                 : 20
BUS NO                  : 0
TARGET ID               : 0
TARGET ADDRESS          : 1.1.3923087114.0:0
SESSION STATUS          : DROPPED AT Thu Apr 24 10:43:41 2008
NO. OF PORTALS          : 1
Segmentation fault

raclinux1:/etc # echo $?
139

Conclusion..


This proves that the linux-iscsi package can be made to work on 2.6.5-7.x kernels, or indeed on any Linux distribution with a kernel older than 2.6.14. So if open-iscsi will not build on your distribution, do not despair; there are other avenues. This article also demonstrates how to use Openfiler's command-line interface, which in places works better than the GUI console.

It also shows that Openfiler has some caveats, but if you know your way around them, life is good.

Virtualbox How-To: Gotchas involved in making a USB external hard disk work on a Windows host

Preface


Some time back, I installed Openfiler on an external hard disk (manufacturer: Iomega) that I had at home (running Windows Vista Home Premium). This was meant to be a proof of concept, and pretty soon I had to bring it to my office desktop (running Windows XP SP2) to make it work with some other VirtualBox virtual machines that I had built in my spare time.

Troubles in recognizing the USB external hard disk on windows XP


However, the problem I faced on the Windows XP SP2 host was that the Openfiler virtual machine (using VirtualBox 1.5.6, of course) would not recognize the USB external hard disk at all! The same USB device was working fine on Windows Vista.

After struggling with this for a while, I found a quick workaround (not sure whether it is documented in the user manual) and also tested some reset steps on Windows, which seem to work with the Iomega external USB hard disk.

These steps should also be relevant to other hard disk models in general.

Some more details...


In my case, Windows had detected the USB drive and had even installed the VirtualBox driver for it. USB support was enabled for the Openfiler virtual machine in its settings, and the external hard disk was selected, but still no disk showed up in fdisk -l.

How to know when a device is being accessed..

Well, if the device makes a whirring noise and its light blinks or comes on, that is enough to know the device is being accessed, and that something good happened in terms of making the USB device work.

The setup that worked..


However, this setup worked after a little tinkering:

First, make sure the USB controller and the EHCI controller are enabled. Then choose the USB external hard disk from the list of all USB devices and enable it.




Make sure no other virtual machine is accessing the same USB device. From what I have seen, when a device is attached to one VirtualBox VM, an exclusive lock is acquired on it, which prevents other virtual machines from using it.
This is true of Windows too: Windows stops seeing the USB hard disk while it is in use by a VirtualBox virtual machine. At least, I have seen this behaviour with VirtualBox 1.5.6.




Make sure that Windows detects the USB device first; if not, troubleshoot that before anything else. Usually, powering off the device, unplugging it from the USB port and restarting the Windows PC/desktop/laptop works great. (Hey, it's Windows.)

You want to see something like this in your system tray to be sure of this.



Now you know for a fact that Windows, your host operating system, detected the USB drive fine. So there is a good chance that VirtualBox will detect it too.

Fire it up..


Power up the virtual machine and check whether the USB device was detected. A simple fdisk -l will show it as a SCSI device, with a name like /dev/sda or /dev/sdb. If you see nothing of the sort, you can be certain the device was not detected. Another visual check is whether the light on the USB device comes on. No light, no access.

The manual workaround if the device is still not seen after powering up the virtual machine..


However, there is still another trick up our sleeve that can be tried.




At the bottom message bar of the virtual machine window, right-click the USB plug icon; a list of the detected USB devices comes up. Select the external hard disk. Unless another running virtual machine is using the same USB device, there is a very good chance the light on the hard disk will come on, like this:



At this point, you may also see a Windows dialog box saying that a VirtualBox USB device was detected and its driver needs to be loaded:


Double check before you congratulate yourself..


All right! So it looks like your device got accessed after all.

There are several ways to confirm whether the virtual machine really detected it as a SCSI device:

1) Check the fdisk -l command's output:




2) Check the contents of /proc/scsi/scsi file:




3) Check the content of /var/log/messages or the output of dmesg command:



A quick Recap...


Once again, if this does not work, just go back to the basics:

1) Power off the USB device (when applicable) and take it out

2) Reboot windows

3) Plug in the USB device

4) Make sure windows detects it

5) Make sure the virtual machine has the USB device enabled in its setup

6) Start up the virtual machine. Make sure no other virtual machine is using the same USB drive.

7) If you do not see the USB device, force it to be attached by right-clicking the USB plug icon in the bottom bar of the virtualbox VM application window

Hope that helps.

Monday, April 21, 2008

Combining Openfiler and Virtualbox (Ubuntu guest OS on windows host)

Preface - Celebrating Openfiler


Ever since I came to know about openfiler, a free open source network attached storage appliance, I could not wait to get started on it! Compared to similar open source products on the internet, like FreeNAS, openfiler has much better reviews from the user community.

In this article, we will talk about how to leverage openfiler along with an Ubuntu virtual machine running on a windows host. The fact that the virtual machine is running on windows is immaterial, as most of the material covered here deals with making openfiler shared devices work with a unix distribution, Ubuntu in our case. The steps should not be largely different for any other unix operating system.

The ingredients


One of the main ingredients of this setup is knowing how to make host only networking work in Virtualbox. For this, I recommend going through the article Virtualbox Case Study: Making host only networking work between two Ubuntu Guest OS (virtual machine) on Windows Vista host, which was posted just before this one. It has detailed steps on how two virtual machines can be made to talk to each other, with internet access working too.


So, going along with this idea, you need to set up host only networking between a unix virtual machine and an openfiler virtual machine. For installing openfiler, detailed graphical how-to instructions are available at http://openfiler.com/learn/how-to/graphical-installation. If you are a text person, consider the text instructions at http://openfiler.com/learn/how-to/text-based-installation.

The other important ingredient that we need is an external USB hard disk to attach with the openfiler virtual machine like this:



I discovered that after the openfiler virtual machine booted up, Windows Vista Home Premium was not able to discover the USB hard disk in Explorer. After I shut down the openfiler virtual machine, windows would detect the USB external drive again.

(Screenshot: windows discovers the USB drive)



I also noticed that if I specified the USB mouse device in the virtualbox USB devices setup of the unix machine, the virtual machine was not able to access it! (The touchpad still worked, though.) So eventually, I just unchecked the USB mouse device for the Ubuntu virtual machine and everything was OK.

The Openfiler Virtualbox VM setup example




The Ubuntu unix Virtualbox VM setup example




After the openfiler installation, you can see that it installs a linux 2.6.x OS, which can be brought up like any other linux installation.



Once you boot up the operating system, you can invoke the web administration GUI tool using http://<IP of openfiler server>:446 like this (openfiler/password is the default login):



A Caveat if you do not want to use the entire external USB drive as network attached storage


Since I could not afford to dedicate my entire 300GB USB external drive to this experiment, I had to find a way of working with a part of it. Thankfully, with the help of the gparted live CD, I was able to resize the FAT32/NTFS windows partition on the external drive to 175G and create another partition of 125G with the remaining space. If you have never used gparted (the GNOME partition editor), I must tell you that you simply have to try it. It is free, it's versatile and it's simply amazing.
Please note that I DID NOT format the 125G partition as ext3 or any other filesystem. This was important, since openfiler was otherwise not able to present the second partition of the USB drive as an iSCSI device.

Once this was done, I found that the openfiler administration GUI was having a hard time creating a physical volume and a volume group based on the second partition carved out of the USB drive. It seems that the GUI assumes the entire attached drive should be available for its manipulations.

To get around this problem, I had to use the command line interface, which was good in a way, as I was able to learn many of the commands that are employed by the pretty front end. Using fdisk -l, it could be seen that the USB external drive was discovered as /dev/sda by the openfiler OS.
# pvcreate /dev/sda2
Physical volume "/dev/sda2" successfully created



Let it be understood that after any change on openfiler, the iSCSI service must be restarted. This can be done either with the "# service iscsi-target restart" command or from the GUI: Services->Enable/Disable (disable and then re-enable the iSCSI service to achieve the same result).

# vgcreate openfiler /dev/sda2
Volume group "openfiler" successfully created

(Note that the web UI does not show the PVs yet, but it's still OK.)



After this, we need to create the logical volumes. Another weird thing I noticed was that if I used the lvcreate command line interface to create logical volumes, they did not show up in the web admin utility, even after restarting the iSCSI service on the openfiler OS.

So I created three of them from the web administration utility (Hey, whatever works):

/dev/openfiler/ocr
/dev/openfiler/vote
/dev/openfiler/asm

This is how the web GUI showed them now:




The openfiler:/etc/ietd.conf file now has contents like this:
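Based on the three logical volumes created above, the file contents would look roughly like the sketch below. The Target/Lun layout follows the iSCSI Enterprise Target config format, and the exact lines are an assumption; only the iqn and volume names are taken from this example:

```
Target iqn.2006-01.com.openfiler:openfiler.ocr
        Lun 0 Path=/dev/openfiler/ocr,Type=fileio
Target iqn.2006-01.com.openfiler:openfiler.vote
        Lun 0 Path=/dev/openfiler/vote,Type=fileio
Target iqn.2006-01.com.openfiler:openfiler.asm
        Lun 0 Path=/dev/openfiler/asm,Type=fileio
```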


Defining Local networks


Be informed that another variation here is to define a local network, i.e. an IP subnet, which essentially decides which machines are allowed to discover the LUNs in the network attached storage. Using the GUI, this is achieved through General->Local Networks.



Subsequently, you can allow/deny access to each logical volume from the GUI by updating the properties of the respective logical volumes (be aware that this creates the /etc/initiators.allow and /etc/initiators.deny files in the openfiler OS):



I did not use any local networks, as I got burnt by giving an incorrect subnet, which caused the discovery of LUNs from the Ubuntu VM to fail. So I just kept it plain and simple by allowing any machine on the LAN to discover the LUNs.

CHAP authentication


There is also something called CHAP authentication (the usernames and passwords can be set using the iscsiadm command) to further qualify iSCSI initiator/target LUN discovery. This is a topic I have not explored fully at this point, so I did not enable incoming/outgoing CHAP authentication on either the openfiler OS (iSCSI target) or the Ubuntu virtual machine (iSCSI initiator).


This was another aspect of open-iscsi that burnt me while trying to discover target LUNs from the Ubuntu VM, so I just steered clear of it for the time being.




There is a fantastic how-to on configuring open-iscsi with CHAP authentication at http://en.opensuse.org/Open-iSCSI_and_SUSE_Linux. I would strongly recommend reading and digesting it.



Understanding how LUN discovery works with open-iscsi


open-iscsi is a robust, well-performing iSCSI initiator package that is very much in vogue and is being adopted by various unix flavours.

It runs a background daemon called iscsid. open-iscsi keeps a persistent configuration of target LUNs and initiator nodes in a database. The iscsiadm utility is a command-line tool to manage (update, delete, insert, query) this persistent database.

The database contains two tables:

- Discovery table (/etc/iscsi/send_targets);
- Node table (/etc/iscsi/nodes).

You can install the open-iscsi package using either the Synaptic package manager or the "sudo apt-get install open-iscsi" command on Ubuntu. In my case, the apt-get command was somehow not able to refer to the Ubuntu repositories, but thankfully the Synaptic package manager worked fine.

open-iscsi works on a client-server model. Initiators like the Ubuntu virtual machine send discovery requests to iSCSI targets and access them by creating login sessions. For the duration of the session, the initiator can access the discovered targets (LUNs). Simply put, that is it.

This can be made a little complicated by adding CHAP authentication into the mix.

Each initiator has a unique name, which can be found by checking the contents of /etc/iscsi/initiatorname.iscsi:
gverma@gverma-laptop:~$ sudo more /etc/iscsi/initiatorname.iscsi
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:8211251a31ff
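If you need the IQN in a script (for example, to register the initiator on the target side), it can be pulled out with sed. This sketch creates a sample copy of the file so it is self-contained; on a real system you would read /etc/iscsi/initiatorname.iscsi directly:

```shell
# Sketch: extract the initiator IQN from initiatorname.iscsi.
# A sample copy of the file is created so the snippet is self-contained.
cat > /tmp/initiatorname.iscsi <<'EOF'
## DO NOT EDIT OR REMOVE THIS FILE!
InitiatorName=iqn.1993-08.org.debian:01:8211251a31ff
EOF

# Print only the line starting with InitiatorName=, with the prefix stripped.
iqn=$(sed -n 's/^InitiatorName=//p' /tmp/initiatorname.iscsi)
echo "This initiator is: $iqn"
```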


Making iscsi work


After installing open-iscsi, I made sure that /etc/iscsi/iscsid.conf had the defaults configured, without CHAP authentication. The defaults for the rest are usually OK, which is what I went along with. After any change, it's important to restart the open-iscsi service:

# sudo /etc/init.d/open-iscsi restart

I also disabled CHAP authentication (both incoming and outgoing users) for each logical volume from the openfiler GUI administration utility.
The IP of the openfiler VM was 192.168.0.6 and that of the Ubuntu VM was 192.168.0.4.

Some initial problems..


gverma@gverma-laptop:~$ sudo iscsiadm -m discovery -t st -p 192.168.0.6
iscsiadm: Login failed to authenticate with target

iscsiadm: discovery login to 192.168.0.6 rejected:
initiator error (02/01), non-retryable, giving up

To debug this, we can use the -d switch:
gverma@gverma-laptop:~$ sudo iscsiadm -m discovery -d -t st -p 192.168.0.6

discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 192.168.0.6
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.auth.username = gverma
discovery.sendtargets.auth.password = ********

discovery.sendtargets.auth.username_in = <empty>
discovery.sendtargets.auth.password_in = <empty>

discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.timeo.idle_timeout = 60
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768

It seemed that some sort of authentication was still being used. To overcome this, I commented out the CHAP authentication settings for all discovery modes in /etc/iscsi/iscsid.conf and restarted the open-iscsi service.
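For reference, a minimal sketch of the relevant discovery section of /etc/iscsi/iscsid.conf with CHAP commented out; the parameter names follow the open-iscsi defaults, and the username/password placeholders are assumptions:

```
# /etc/iscsi/iscsid.conf -- discovery section with CHAP disabled
#discovery.sendtargets.auth.authmethod = CHAP
#discovery.sendtargets.auth.username = username
#discovery.sendtargets.auth.password = password
```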

Then we needed to discover the targets on 192.168.0.6 (openfiler VM):
gverma@gverma-laptop:~$ sudo iscsiadm -m discovery -t st -p 192.168.0.6
192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.asm
192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.ocr
192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.vote

gverma@gverma-laptop:~$ sudo iscsiadm -m discovery
192.168.0.6:3260 via sendtargets

gverma@gverma-laptop:~$ sudo iscsiadm -m node
192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.ocr
192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.vote
192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.asm

Now that the targets were discovered from the Ubuntu node, we needed to log in to each of them:
gverma@gverma-laptop:~$ sudo iscsiadm -m node -T iqn.2006-01.com.openfiler:openfiler.ocr -p 192.168.0.6 -l

Login session [iface: default, target: iqn.2006-01.com.openfiler:openfiler.ocr,
portal: 192.168.0.6,3260]

iscsiadm: initiator reported error (5 - encountered iSCSI login failure)
iscsiadm: Could not execute operation on all records. Err 107.

This error occurred because I had set up a local network for qualifying initiators to connect to LUNs, and had basically given a wrong subnet in the openfiler setup. When I removed the local network from the openfiler setup, removed the openfiler:/etc/initiators.allow and /etc/initiators.deny files, and restarted the ietd service, the command went through.

Now, we can also set up the target LUNs to start up/attach automatically when the iscsi service restarts on the initiator (this makes the LUNs visible from the iSCSI initiator by creating a session to the iSCSI target). This can be done by setting the node.startup property to automatic in the iscsid database:
gverma@gverma-laptop:~$ sudo iscsiadm -m node \
-T iqn.2006-01.com.openfiler:openfiler.ocr -p 192.168.0.6 \
--op update -n node.startup -v automatic
gverma@gverma-laptop:~$ sudo iscsiadm -m node \
-T iqn.2006-01.com.openfiler:openfiler.vote -p 192.168.0.6 \
--op update -n node.startup -v automatic
gverma@gverma-laptop:~$ sudo iscsiadm -m node \
-T iqn.2006-01.com.openfiler:openfiler.asm -p 192.168.0.6 \
--op update -n node.startup -v automatic
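The three updates above differ only in the target name, so they can be collapsed into a loop. In this sketch the commands are only echoed (a dry run, since iscsiadm needs a live target); pipe the output to sh to actually run them:

```shell
# Sketch: generate the node.startup updates for all three targets in one loop.
# The commands are echoed rather than executed, as a dry run.
PORTAL=192.168.0.6
cmds=$(for t in ocr vote asm; do
  echo "sudo iscsiadm -m node -T iqn.2006-01.com.openfiler:openfiler.$t -p $PORTAL --op update -n node.startup -v automatic"
done)
printf '%s\n' "$cmds"
```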

Now, when we restart the open-iscsi service, it can be seen that the target LUNs attach to the initiator:
gverma@gverma-laptop:~$ sudo /etc/init.d/open-iscsi restart
* Disconnecting iSCSI targets [ OK ]
* Stopping iSCSI initiator service [ OK ]
* Starting iSCSI initiator service iscsid [ OK ]
* Setting up iSCSI targets
Login session [iface: default, target: iqn.2006-01.com.openfiler:openfiler.ocr,
portal: 192.168.0.6,3260]
Login session [iface: default, target: iqn.2006-01.com.openfiler:openfiler.vote,
portal: 192.168.0.6,3260]
Login session [iface: default, target: iqn.2006-01.com.openfiler:openfiler.asm,
portal: 192.168.0.6,3260] [ OK ]

You can verify the login sessions with this command:
gverma@gverma-laptop:~$ sudo iscsiadm -m session
tcp: [4] 192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.ocr
tcp: [5] 192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.vote
tcp: [6] 192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.asm

If you log out, the sessions won't be visible anymore:
gverma@gverma-laptop:~$ sudo iscsiadm -m node \
-T iqn.2006-01.com.openfiler:openfiler.asm -p 192.168.0.6 --logout
Logout session [sid: 1, target: iqn.2006-01.com.openfiler:openfiler.ocr,
portal: 192.168.0.6,3260]
Logout session [sid: 2, target: iqn.2006-01.com.openfiler:openfiler.vote,
portal: 192.168.0.6,3260]
Logout session [sid: 3, target: iqn.2006-01.com.openfiler:openfiler.asm,
portal: 192.168.0.6,3260]
gverma@gverma-laptop:~$ sudo iscsiadm -m session
iscsiadm: No active sessions.

You can log back in now:
gverma@gverma-laptop:~$ sudo iscsiadm -m node \
-T iqn.2006-01.com.openfiler:openfiler.asm -p 192.168.0.6 --login
Login session [iface: default, target: iqn.2006-01.com.openfiler:openfiler.ocr,
portal: 192.168.0.6,3260]
Login session [iface: default, target: iqn.2006-01.com.openfiler:openfiler.vote,
portal: 192.168.0.6,3260]
Login session [iface: default, target: iqn.2006-01.com.openfiler:openfiler.asm,
portal: 192.168.0.6,3260]
gverma@gverma-laptop:~$ sudo iscsiadm -m session
tcp: [4] 192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.ocr
tcp: [5] 192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.vote
tcp: [6] 192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.asm

This is how the mapping between the attached SAN LUNs and device names is obtained for the current session:
gverma@gverma-laptop:~$ ls -l /dev/disk/by-path
lrwxrwxrwx 1 root root 9 2008-04-16 22:02 ip-192.168.0.6:3260-iscsi-iqn.2006-01.com.openfiler:openfiler.asm-lun-0 -> ../../sdd
lrwxrwxrwx 1 root root 9 2008-04-16 22:02 ip-192.168.0.6:3260-iscsi-iqn.2006-01.com.openfiler:openfiler.ocr-lun-0 -> ../../sdb
lrwxrwxrwx 1 root root 9 2008-04-16 22:02 ip-192.168.0.6:3260-iscsi-iqn.2006-01.com.openfiler:openfiler.vote-lun-0 -> ../../sdc
lrwxrwxrwx 1 root root 9 2008-04-16 12:48 pci-0000:00:01.1-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 2008-04-16 12:48 pci-0000:00:01.1-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 2008-04-16 12:48 pci-0000:00:01.1-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 2008-04-16 12:48 pci-0000:00:01.1-scsi-0:0:0:0-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 2008-04-16 12:48 pci-0000:00:01.1-scsi-1:0:0:0 -> ../../scd0
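The by-path symlink names encode the portal and target IQN, so the target-to-device mapping can be extracted with a little text processing. This sketch works on a saved copy of the listing; the sample lines mirror the session above, and on a real system you would capture them with "ls -l /dev/disk/by-path":

```shell
# Sketch: derive the "target iqn -> device" mapping from by-path symlinks.
# Sample symlink names are used so the snippet is self-contained.
cat > /tmp/by_path.txt <<'EOF'
ip-192.168.0.6:3260-iscsi-iqn.2006-01.com.openfiler:openfiler.asm-lun-0 -> ../../sdd
ip-192.168.0.6:3260-iscsi-iqn.2006-01.com.openfiler:openfiler.ocr-lun-0 -> ../../sdb
ip-192.168.0.6:3260-iscsi-iqn.2006-01.com.openfiler:openfiler.vote-lun-0 -> ../../sdc
EOF

# Keep only "<target iqn> <device>" pairs from the iSCSI entries.
mapping=$(sed -n 's|^.*-iscsi-\(.*\)-lun-0 -> \.\./\.\./\(.*\)$|\1 \2|p' /tmp/by_path.txt)
printf '%s\n' "$mapping"
```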

Paydirt: Seeing the LUNs on Ubuntu VM


This could be seen in the /var/log/messages:


Apr 16 21:53:19 gverma-laptop kernel: [18263.996005] scsi 42:0:0:0: Direct-Access Openfile Virtual disk 0 PQ: 0 ANSI: 4
Apr 16 21:53:19 gverma-laptop kernel: [18264.000393] sd 42:0:0:0: [sdb] 2097152 512-byte hardware sectors (1074 MB)
Apr 16 21:53:19 gverma-laptop kernel: [18264.002548] sd 42:0:0:0: [sdb] Write Protect is off
Apr 16 21:53:19 gverma-laptop kernel: [18264.004196] sd 42:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 21:53:19 gverma-laptop kernel: [18264.004196] sd 42:0:0:0: [sdb] 2097152 512-byte hardware sectors (1074 MB)
Apr 16 21:53:19 gverma-laptop kernel: [18264.004196] sd 42:0:0:0: [sdb] Write Protect is off
Apr 16 21:53:19 gverma-laptop kernel: [18264.009630] sd 42:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 21:53:19 gverma-laptop kernel: [18264.009718] sdb: unknown partition table
Apr 16 21:53:19 gverma-laptop kernel: [18264.030856] sd 42:0:0:0: [sdb] Attached SCSI disk
Apr 16 21:53:19 gverma-laptop kernel: [18264.030974] sd 42:0:0:0: Attached scsi generic sg2 type 0
Apr 16 21:53:19 gverma-laptop kernel: [18264.292677] scsi43 : iSCSI Initiator over TCP/IP
Apr 16 21:53:19 gverma-laptop kernel: [18264.553516] scsi 43:0:0:0: Direct-Access Openfile Virtual disk 0 PQ: 0 ANSI: 4
Apr 16 21:53:19 gverma-laptop kernel: [18264.555463] sd 43:0:0:0: [sdc] 2097152 512-byte hardware sectors (1074 MB)
Apr 16 21:53:19 gverma-laptop kernel: [18264.556540] sd 43:0:0:0: [sdc] Write Protect is off
Apr 16 21:53:19 gverma-laptop kernel: [18264.559456] sd 43:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 21:53:19 gverma-laptop kernel: [18264.561681] sd 43:0:0:0: [sdc] 2097152 512-byte hardware sectors (1074 MB)
Apr 16 21:53:19 gverma-laptop kernel: [18264.564112] sd 43:0:0:0: [sdc] Write Protect is off
Apr 16 21:53:19 gverma-laptop kernel: [18264.566617] sd 43:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 21:53:19 gverma-laptop kernel: [18264.566719] sdc: unknown partition table
Apr 16 21:53:19 gverma-laptop kernel: [18264.585774] sd 43:0:0:0: [sdc] Attached SCSI disk
Apr 16 21:53:19 gverma-laptop kernel: [18264.585872] sd 43:0:0:0: Attached scsi generic sg3 type 0
Apr 16 21:53:20 gverma-laptop kernel: [18264.847763] scsi44 : iSCSI Initiator over TCP/IP
Apr 16 21:53:20 gverma-laptop kernel: [18265.112437] scsi 44:0:0:0: Direct-Access Openfile Virtual disk 0 PQ: 0 ANSI: 4
Apr 16 21:53:20 gverma-laptop kernel: [18265.112437] sd 44:0:0:0: [sdd] 125829120 512-byte hardware sectors (64425 MB)
Apr 16 21:53:20 gverma-laptop kernel: [18265.112437] sd 44:0:0:0: [sdd] Write Protect is off
Apr 16 21:53:20 gverma-laptop kernel: [18265.113012] sd 44:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 21:53:20 gverma-laptop kernel: [18265.115823] sd 44:0:0:0: [sdd] 125829120 512-byte hardware sectors (64425 MB)
Apr 16 21:53:20 gverma-laptop kernel: [18265.117126] sd 44:0:0:0: [sdd] Write Protect is off
Apr 16 21:53:20 gverma-laptop kernel: [18265.119686] sd 44:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 21:53:20 gverma-laptop kernel: [18265.119786] sdd: unknown partition table
Apr 16 21:53:20 gverma-laptop kernel: [18265.134147] sd 44:0:0:0: [sdd] Attached SCSI disk
Apr 16 21:53:20 gverma-laptop kernel: [18265.134235] sd 44:0:0:0: Attached scsi generic sg4 type 0

The SAN devices were visible on Ubuntu VM now:


gverma@gverma-laptop:~$ sudo fdisk -l
Disk /dev/sda: 5906 MB, 5906628608 bytes
255 heads, 63 sectors/track, 718 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000eb831
Device Boot Start End Blocks Id System
/dev/sda1 * 1 680 5462068+ 83 Linux
/dev/sda2 681 718 305235 5 Extended
/dev/sda5 681 718 305203+ 82 Linux swap / Solaris
Disk /dev/sdb: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Disk identifier: 0x00000000
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 64.4 GB, 64424509440 bytes
64 heads, 32 sectors/track, 61440 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk identifier: 0x00000000
Disk /dev/sdd doesn't contain a valid partition table

Gotcha: Logical Volumes lost after reboot of openfiler


One common caveat I noticed was that the logical volumes were lost after a reboot of the openfiler virtual machine. The issue seems to be related to the detection of USB devices while the linux OS comes up.

Anyways, to get around it, I did the following each time (better to put this in /etc/rc.local):
# pvscan
# vgscan
# lvscan

Activate the logical volumes; otherwise, although sudo /etc/init.d/open-iscsi restart will show that the initiator discovers the target LUNs, they will not show up in fdisk -l, and when you do lvdisplay on the openfiler machine, the status will show as NOT available.

# lvchange -ay openfiler/asm
# lvchange -ay openfiler/ocr
# lvchange -ay openfiler/vote
# pvscan
# vgscan
# lvscan
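A consolidated sketch of the /etc/rc.local additions suggested above; the volume names are from this example, and since device detection order can vary at boot, treat it only as a starting point:

```
# /etc/rc.local additions: rescan LVM and re-activate the iSCSI volumes
pvscan
vgscan
lvchange -ay openfiler/ocr
lvchange -ay openfiler/vote
lvchange -ay openfiler/asm
lvscan
```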

You can verify the status of the logical/physical volumes and volume groups now:
# lvdisplay
# pvdisplay
# vgdisplay

Pending Areas to explore



  • One of the pending topics to explore is unique device labeling using udev. This will prevent a LUN name from changing from /dev/sdb to /dev/sdd all of a sudden, should you happen to add a new logical volume in the SAN or restart the open-iscsi service on the initiator.


I tried setting it up using some examples from the internet, but it did not quite work out. It also seems that devlabel is passe' and udev is favored in most unix distributions.
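As a starting point for the udev approach, a rule along these lines could pin a stable symlink to a LUN. The rule file name, the scsi_id invocation, and the RESULT value are all assumptions here and vary between distributions:

```
# /etc/udev/rules.d/55-iscsi-names.rules (hypothetical file name)
# Give the ocr LUN a stable /dev/iscsi/ocr symlink, keyed on its SCSI id.
KERNEL=="sd?", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -s %p", RESULT=="<scsi id of the ocr LUN>", SYMLINK+="iscsi/ocr"
```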


  • Mounting the SAN device as a filesystem on the initiator machine.


Conclusion


Well, it's not as if I covered the whole nine yards, but it was a start, and I hope to complete the remaining topics soon. When I do, I will cover them in more detail either in this article or in a separate one.

In the meanwhile, if you have any feedback, feel free to leave a message or email me at gaurav _ verma 22 at yahoo DOT com.

Friday, April 18, 2008

Virtualbox Case Study: Making host only networking work between two Ubuntu Guest OS (virtual machine) on Windows Vista host

Preface


In the past articles, I have talked about how to make internal networking work between two virtual machines built using Virtualbox. In this article, we will see a configuration of host only networking between two virtual machines built on virtualbox.

The advantages of configuring host only networking are :


1) Internet works (yes!)

2) You get an IP on the LAN for each virtual machine. Yes, this is really possible. This means you can have 2 more real LAN IPs coming out of a single windows desktop/laptop. Isn't that amazing?

3) Due to 2), your host machine can also ping or access your virtual machine

4) Each virtual machine can access the other virtual machine. E.g. ping/ssh/telnet into it.

For most purposes, this kind of setup should be sufficient. Hey, if you can access your machines on the LAN and the internet works from them, that should be good enough. And it's free.

Here is a bird's eye overview of the end setup..




The Virtualbox Network Adapter configuration


We start with the Virtualbox Network Adapter configuration. We create a host only network interface on each virtual machine definition.

For Virtual machine1 - Gutsy:



For Virtual machine2 - gutsy2:



Please note that while this article primarily deals with setting up host only networking, in the previous article I had set up internal networking too, so these VMs also have a Virtual Host Interface 2 for internal networking between themselves.

Install the Guest Operating system on the virtual machines


Now, install the guest operating system in each virtual machine and set up the two network interfaces like this.
Note that I chose the static IP 192.168.0.5 because my windows machine (the host OS) had the IP 192.168.0.3, and the IP had to be in the 192.168.0.x subnet. My windows host was behind a Netgear router.


We need to make sure we choose non-conflicting IPs for the host only interface, otherwise you will get an IP conflict in your LAN. For the other machine, the eth0 static IP was 192.168.0.4.

An interesting fact here was that if I chose DHCP for eth0 in the VM, I was getting an IP of 192.168.0.3, the same as that of my windows host!! This is another reason why I had to choose a static IP address.


For eth1, I chose a random IP in the 192.168.2.x subnet for no particular reason. The eth1 IP of the other virtual machine was 192.168.2.1.
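For reference, the /etc/network/interfaces stanzas behind this setup would look roughly like the sketch below. The eth0 address and gateway come from this example; the eth1 address 192.168.2.2 is an assumption for this machine, so substitute your own values:

```
# /etc/network/interfaces (sketch)
auto eth0
iface eth0 inet static
    address 192.168.0.5
    netmask 255.255.255.0
    gateway 192.168.0.1

auto eth1
iface eth1 inet static
    address 192.168.2.2
    netmask 255.255.255.0
```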



Make sure to restart the network after doing this network setup to bring it into effect.

Do not bother about the SIOCADDRT: No such process error:
gverma@gverma-laptop:~$ sudo /etc/init.d/networking restart
[sudo] password for gverma:
* Reconfiguring network interfaces...
SIOCADDRT: No such process
Failed to bring up eth1.
[ OK ]

For the benefit of the reader, the DNS and host information is also reproduced here, although the only things of significance should be the DNS and default gateway, both of which were set to the same values as on the windows host:



This is how the network configuration on the other virtual machine looked like:



For those who are interested, here is the output of the route command:



gverma@gverma-laptop:~$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.2.0 * 255.255.255.0 U 0 0 0 eth1
192.168.0.0 * 255.255.255.0 U 0 0 0 eth0
link-local * 255.255.0.0 U 1000 0 0 eth0
default 192.168.0.1 0.0.0.0 UG 100 0 0 eth0

A quick side note for Windows Vista


Here is a quick side note for Vista, or windows in general. It is preferable to disable the UAC (User Account Control) setting and also to disable the firewall on the host windows machine.



Disable the User Account Control (UAC) feature on Vista to make your life easier:


Network bridging: the key to make host only networking work


The KEY to making host only networking work on windows is to bridge the real working network interface (either a wireless connection or a hard wired ethernet) with the virtual adapter network interface. You can do this by selecting the two network interfaces, right-clicking, choosing bridge, and voila, there you have it.
A network bridge is nothing but a simplified concept of joining two connections into one. It's like joining two rivers into a bigger river: when water flows into the bigger river, it flows into both rivers. At least, that is how I understand it.

ALSO, we have to enable promiscuous packet routing mode on ALL the member network interfaces in the bridge. This is an important step to make it work.



Initially, I struggled a lot to make host only networking work by enabling promiscuous mode for only the Virtualbox adapter in the bridge, but was not successful. That is when I read a cryptic posting on the Virtualbox forum saying that it has to be done for all the member bridge interfaces; and that's when it worked.




A note of caution


When we add an adapter to the bridge, the connection of the main wireless network is lost for a moment, but it is re-enabled. This is also covered in section 6.3 of the user manual (Virtualbox version 1.5.6):



Warning: Setting up Host Interface Networking requires changes to your
host’s network configuration, which will cause the host to lose its network
connection.
Do not change network settings on remote or production systems
unless you know what you are doing.



Later on, I assigned Virtualbox Host Interface 2 (on windows) to NAT0 (eth0) of Virtual machine 2 and bridged it to the wireless network connection too:



After the bridging, this is how the ipconfig output looked (the IP of the windows host is 192.168.0.3):


(Screenshot: ipconfig output; there is a new IP on the LAN now)


Now, we were one BIG, HAPPY Family. Make sure the connection status shows as connected.



Testing the connections - Internet works!


At this point, the internet was working from the virtual machines:


Another real quick check is to use the wget utility:



gverma@gverma-laptop:~$ wget yahoo.com
--18:32:56-- http://yahoo.com/
=> `index.html'
Resolving yahoo.com... 216.109.112.135, 66.94.234.13
Connecting to yahoo.com|216.109.112.135|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.yahoo.com/ [following]
--18:32:57-- http://www.yahoo.com/
=> `index.html'
Resolving www.yahoo.com... 69.147.114.210
Connecting to www.yahoo.com|69.147.114.210|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9,490 (9.3K) [text/html]

100%[=====================================================>] 9,490 --.--K/s

18:32:57 (258.06 KB/s) - `index.html' saved [9490/9490]

Testing the connections - Host only networking


Now comes the acid test of whether the virtual machines and hosts can see each other.

Windows could see the individual virtual machines:
C:\Users\gaurav> ping 192.168.0.4
Pinging 192.168.0.4 with 32 bytes of data:

Reply from 192.168.0.4: bytes=32 time<1ms TTL=64
Reply from 192.168.0.4: bytes=32 time<1ms TTL=64
Reply from 192.168.0.4: bytes=32 time<1ms TTL=64
Reply from 192.168.0.4: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.0.4:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\gaurav> ping 192.168.0.5
Pinging 192.168.0.5 with 32 bytes of data:

Reply from 192.168.0.5: bytes=32 time<1ms TTL=128
Reply from 192.168.0.5: bytes=32 time<1ms TTL=128
Reply from 192.168.0.5: bytes=32 time<1ms TTL=128
Reply from 192.168.0.5: bytes=32 time<1ms TTL=128

Ping statistics for 192.168.0.5:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

Pinging the windows host IP from inside the VMs worked beautifully:



And, pinging to the other virtual machine from each VM worked beautifully too:

(Screenshot: pinging VM1 and VM2 from a VM)

Taking this a step further, since ssh was enabled on both virtual machines, I was able to log in to the individual virtual machines as well:
gverma@gverma-laptop:~$ ps -ef | grep ssh
gverma 4684 4643 0 21:27 ? 00:00:00 /usr/bin/ssh-agent x-session-manager
root 5422 1 0 22:20 ? 00:00:00 /usr/sbin/sshd
gverma 5456 4864 0 22:23 pts/0 00:00:00 grep ssh
gverma@gverma-desktop:~$ ssh 192.168.0.5

gverma@192.168.0.5's password:
Linux gverma-desktop 2.6.22-14-generic #1 SMP Sun Oct 14 23:05:12 GMT 2007 i686

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
Last login: Mon Apr 14 22:20:55 2008 from gverma-laptop.local
gverma@gverma-desktop:~$

gverma@gverma-desktop:~$ ssh 192.168.0.4
The authenticity of host '192.168.0.4 (192.168.0.4)' can't be established.
RSA key fingerprint is a8:29:91:97:7d:99:37:6e:31:f1:06:ec:04:39:78:d7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.4' (RSA) to the list of known hosts.

gverma@192.168.0.4's password:

Linux gverma-laptop 2.6.22-14-generic #1 SMP Sun Oct 14 23:05:12 GMT 2007 i686

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Private networks


To make things better, the private networks were working too (thanks to the good karma accumulated in the previous article - Case study: Making internal networking work between two Linux guest OSes (Ubuntu) on a Windows Vista host):
gverma@gverma-laptop:~$ ping 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.200 ms
64 bytes from 192.168.2.1: icmp_seq=2 ttl=64 time=1.16 ms
64 bytes from 192.168.2.1: icmp_seq=3 ttl=64 time=0.000 ms
64 bytes from 192.168.2.1: icmp_seq=4 ttl=64 time=0.073 ms
--- 192.168.2.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 0.000/0.358/1.160/0.468 ms

gverma@gverma-laptop:~$ ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2) 56(84) bytes of data.
64 bytes from 192.168.2.2: icmp_seq=1 ttl=64 time=4.49 ms
64 bytes from 192.168.2.2: icmp_seq=2 ttl=64 time=0.368 ms
64 bytes from 192.168.2.2: icmp_seq=3 ttl=64 time=1.65 ms
64 bytes from 192.168.2.2: icmp_seq=4 ttl=64 time=3.15 ms
64 bytes from 192.168.2.2: icmp_seq=5 ttl=64 time=1.51 ms

A moment of Triumph


So here we are. Let us take a moment to sit back, relax and let reality sink in. It is really working! I am reminded of the umpteen times I had gone to forums.virtualbox.org to find the answer to this riddle, and here it is, solved.

I hope this guide is of use to someone else who is trying to make a similar configuration work, or who is simply trying to understand what can be achieved with host-only networking.

Nothing could be sweeter, nothing.

Sunday, April 13, 2008

Case study: Making internal networking work between two Linux guest OSes (Ubuntu) on a Windows Vista host

Preface


For the past few days, I had been struggling to make internal networking work between two (or more) Linux guest OSes on a Windows host using VirtualBox.

The advantage of this setup is that we can build an internal networking lab between two or more nodes on a regular Windows machine, which can be used for a variety of purposes. My primary purpose behind this setup was to simulate an environment for implementing an Oracle 10gR2 RAC cluster.

As we know, setting up 10g RAC on a node needs a public network and a private network. The public network is achieved right off the bat with the NAT network configuration in VirtualBox, but the internal network setup is pretty tricky.
I will give credit to both http://www.virtualbox.org/wiki/Testing_Networks and http://blogs.sun.com/manoj/entry/netowkring_with_virtualbox for the ideas tested, but more so to http://www.virtualbox.org/wiki/Testing_Networks

That said, I daresay that the VirtualBox user manual's coverage of setting up internal networking on a Windows host deserves a little more attention. I was able to find several posts on the internet, including a particular VirtualBox wiki page, that dealt with making host-only networking or internal networking work on an Ubuntu host, but I was not able to find the same for a Windows host.

Let's get to the setup now..


This is how the GUI setup for the internal network (the second adapter) looks for the two virtual machines:




The caveat...


The first thing we need to understand is that the VirtualBox GUI for setting up an internal network is BROKEN!

You need to set it up using the command line interface (the VBoxManage command). Also, be aware that the internal network settings are reset by the GUI if any of the VM settings are changed, so you had better do this AFTER you are done with all the other VM changes.

As per the user manual, the default internal network name is "intnet", but you can choose any other name. I chose intnet for no particular reason.

This is what we need to do with the VBoxManage command. (Here, we assume that there are two guest OSes, "Gutsy" and "gutsy2".) Mind you, the virtual machine name is case-sensitive in the VBoxManage command.

c:\Program Files\Innotek> VboxManage modifyvm Gutsy -nic2 intnet

c:\Program Files\Innotek> VboxManage modifyvm Gutsy -intnet2 intnet

c:\Program Files\Innotek> VboxManage modifyvm gutsy2 -nic2 intnet

c:\Program Files\Innotek> VboxManage modifyvm gutsy2 -intnet2 intnet
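Since these four commands must be re-run whenever the GUI resets the settings, it can help to wrap them in a small script. Here is a minimal sketch (the VM names "Gutsy"/"gutsy2" and the "intnet" network name are from the setup above; the DRY_RUN guard, which defaults to only printing the commands, is my own addition):

```shell
#!/bin/sh
# Sketch: apply the internal-network settings to both VMs in one pass.
# With DRY_RUN=1 (the default here) the commands are only printed;
# set DRY_RUN=0 to actually invoke VBoxManage.
DRY_RUN=${DRY_RUN:-1}

apply_intnet() {            # usage: apply_intnet <vm-name>
    for args in "-nic2 intnet" "-intnet2 intnet"; do
        if [ "$DRY_RUN" = 1 ]; then
            echo "VBoxManage modifyvm $1 $args"
        else
            VBoxManage modifyvm "$1" $args
        fi
    done
}

for vm in Gutsy gutsy2; do
    apply_intnet "$vm"
done
```

Remember that the VM names are case-sensitive here, just as they are when running VBoxManage by hand.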


Default gateway and DNS of the eth1 interfaces (used for the private network, as per our semantics)


If you already have eth0 in the guest OS (Linux) set up as NAT, it will automatically acquire a DHCP-assigned IP in the 10.0.2.x subnet, and will have a default gateway of 10.0.2.2. The name server will be 10.0.2.3. With this setup, the internet will work on the Linux guest OS (provided it works on the Windows host too).



We will need to make the default gateway of the private network interface (eth1) in the guest OS (Linux) 10.0.2.2 as well -- the same gateway through which the internet is working.

Private IPs should be in the same subnet...


Another thing to notice here is that all the guest OSes that are intended to be connected through the internal network should have their private interface IPs (eth1) in the SAME subnet. You can choose any private subnet, like 192.168.x.x or 10.10.x.x:

Here are three valid combinations of eth1 IPs:

a) Gutsy could have 192.168.2.1, and gutsy2 could have 192.168.2.2


b) Gutsy could have 192.168.3.2, and gutsy2 could have 192.168.3.4


c) Gutsy could have 10.10.1.1, and gutsy2 could have 10.10.1.2



( I hope you get the idea).
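The "same subnet" rule above can be sanity-checked mechanically. Here is a minimal sketch; the `same_subnet_24` helper is hypothetical (not part of any tool mentioned here) and assumes a 255.255.255.0 netmask, as in the example combinations above:

```shell
#!/bin/sh
# Sketch: check whether two dotted-quad IPs share the same /24 subnet
# (i.e. the same first three octets), assuming a 255.255.255.0 netmask.
same_subnet_24() {          # usage: same_subnet_24 <ip1> <ip2>
    if [ "$(echo "$1" | cut -d. -f1-3)" = "$(echo "$2" | cut -d. -f1-3)" ]; then
        echo same
    else
        echo different
    fi
}

same_subnet_24 192.168.2.1 192.168.2.2    # prints "same"     (combination a)
same_subnet_24 192.168.2.1 192.168.3.4    # prints "different" (would NOT work)
```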

For the sake of illustration, I chose this combination of private IPs:

  • 192.168.2.1 (for linux guest OS 1 - Gutsy)

  • 192.168.2.2 (for Linux guest OS 2 - gutsy2)
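On Ubuntu, these static eth1 addresses can be made persistent in /etc/network/interfaces. A minimal sketch of the stanza for Gutsy, assuming the 192.168.2.1 address chosen above and the 10.0.2.2 gateway discussed earlier (use 192.168.2.2 on gutsy2):

```
# /etc/network/interfaces stanza for eth1 on Gutsy (sketch)
auto eth1
iface eth1 inet static
    address 192.168.2.1
    netmask 255.255.255.0
    gateway 10.0.2.2
```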




Restart networking


After doing this, you need to restart networking on the Linux guest. On Ubuntu, it is done with the sudo /etc/init.d/networking restart command:
gverma@gverma-desktop:~$ sudo /etc/init.d/networking restart
[sudo] password for gverma: ********
* Reconfiguring network interfaces...
SIOCADDRT: No such process
Failed to bring up eth1.
[ OK ]

Overview of the networking configuration


Here is how the ifconfig output for Gutsy (Linux guest OS 1) looks (as you can see, the eth0 IP has been acquired in the 10.0.2.x subnet from the DHCP server):



And this is how it looks for Gutsy2 (as you can see, the eth0 IP has been acquired in the 10.0.2.x subnet from the DHCP server):
By mistake, I have put the same image for the ifconfig output (that of Gutsy) for Gutsy2 as well. The hardware MAC addresses for eth1/eth0 should be different, and so should the IP address for eth1 -- it should be 192.168.2.2. The eth0 IP for Gutsy2 would still be 10.0.2.x, as it is of NAT type. -- Thanks, Gaurav



A variation...


Let us consider a variation here.

If a virtual machine has only one VirtualBox network adapter defined, of type internal network, then the static IP inside the virtual machine would need a default gateway equal to the host's default gateway.

For example, if the Windows host had an IP of 192.168.0.4 (say), by virtue of being behind a router, then the default gateway of the eth0 interface in the virtual machine should be 192.168.0.1. (Again, this applies ONLY if you do not have a NAT-type virtual interface defined for the VM.)



In my case, since I had two virtual network interfaces defined (one NAT and one internal networking type) for the VMs, I had to make the default gateway of eth1 (the private interface) of the guest OS the same as that of eth0 -- which was 10.0.2.2.

The default gateway can be verified in the output of the route command.
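For instance, the gateway entry can be picked out of the route output with a little awk. A minimal sketch, parsing an illustrative captured table (the table contents below are my own example, built from the addresses used in this setup; on a live guest you would pipe the real `route -n` output instead):

```shell
#!/bin/sh
# Sketch: extract the default gateway from `route -n` style output.
# On a live guest:  route -n | awk '$1 == "0.0.0.0" { print $2 }'
default_gw() {
    awk '$1 == "0.0.0.0" { print $2 }'
}

# Illustrative captured routing table for a guest with a NAT eth0
# and an internal-network eth1:
default_gw <<'EOF'
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
0.0.0.0         10.0.2.2        0.0.0.0         UG    0      0        0 eth0
EOF
```

The line whose destination is 0.0.0.0 is the default route, so the sketch prints 10.0.2.2 for the sample table.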


Interesting Trivia


In addition, the media state of the virtual TAP adapters on the Windows host was showing up as "media disconnected", but the internet was still working for the virtual machines AND the pings to each other were working. So, don't be fooled by the status of the VirtualBox adapters on the Windows host.

gverma@gverma-desktop:~$ ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2) 56(84) bytes of data.
64 bytes from 192.168.2.2: icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from 192.168.2.2: icmp_seq=2 ttl=64 time=0.030 ms
64 bytes from 192.168.2.2: icmp_seq=3 ttl=64 time=0.109 ms
64 bytes from 192.168.2.2: icmp_seq=4 ttl=64 time=0.031 ms
64 bytes from 192.168.2.2: icmp_seq=5 ttl=64 time=0.036 ms
64 bytes from 192.168.2.2: icmp_seq=6 ttl=64 time=0.033 ms
64 bytes from 192.168.2.2: icmp_seq=7 ttl=64 time=0.110 ms

--- 192.168.2.2 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 5998ms
rtt min/avg/max/mdev = 0.030/0.054/0.110/0.035 ms

gverma@gverma-desktop:~$ ping 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.398 ms
64 bytes from 192.168.2.1: icmp_seq=2 ttl=64 time=0.304 ms
64 bytes from 192.168.2.1: icmp_seq=3 ttl=64 time=0.380 ms
64 bytes from 192.168.2.1: icmp_seq=4 ttl=64 time=7.20 ms
64 bytes from 192.168.2.1: icmp_seq=5 ttl=64 time=0.407 ms
64 bytes from 192.168.2.1: icmp_seq=6 ttl=64 time=0.272 ms
64 bytes from 192.168.2.1: icmp_seq=7 ttl=64 time=3.42 ms
64 bytes from 192.168.2.1: icmp_seq=8 ttl=64 time=0.411 ms
64 bytes from 192.168.2.1: icmp_seq=9 ttl=64 time=0.398 ms

--- 192.168.2.1 ping statistics ---
9 packets transmitted, 9 received, 0% packet loss, time 8004ms
rtt min/avg/max/mdev = 0.272/1.465/7.200/2.241 ms

Voila! The same result can be seen on gutsy2 too.

Well done!


Congratulations on setting up a private internal network between multiple virtual machines using VirtualBox. I hope this tutorial is of use to someone exploring the VirtualBox tool. I personally feel it is a lot leaner than VMware in size, yet just as powerful. It is also more robust performance-wise.

If you feel this was helpful, send me a note at gaurav_verma two two [at] yahoo dot com or just leave me a comment.