Configuring iSCSI and ZFS pool on Sun Storage 7410

Sun Storage 7410 Unified Storage System Tech Spec

Create an iSCSI target

An iSCSI target can be quickly created using the storage web console.
To create a new iSCSI target, go to Configuration → SAN → iSCSI Targets.
Click on the (+) icon to create a new iSCSI target.

Click OK after setting the target properties. A new iSCSI target is created as shown below.

To share LUNs only via particular targets, build target groups. To create a group or add to an existing one, drag the entry from the list on the left to the table on the right. When done adding the new target group, click Apply to save the changes.

Create and Share a LUN as an iSCSI target
A LUN can be quickly created using the storage web console.
To create a new LUN, go to Shares → Projects → default → LUNs.
Click on the (+) icon and provide the LUN configuration.

The target group “targets-0” shares this LUN as an iSCSI target.
When done, click Apply to create the LUN.

Enable iSCSI Data Service
To provide LUN access via the iSCSI protocol, make sure the iSCSI data service is enabled.
Go to Configuration → Services; if the service is not online, click the power icon to bring it online.

Mounting the iSCSI device on the Solaris client
Enable the iSCSI initiator daemon

bash-3.00# svcadm enable svc:/network/iscsi/initiator:default
bash-3.00# svcs svc:/network/iscsi/initiator:default
STATE          STIME    FMRI
online         Aug_05   svc:/network/iscsi/initiator:default
bash-3.00#

bash-3.00# iscsiadm modify discovery --sendtargets enable

Add the storage IP address as the discovery address. Replace 10.6.140.88 with the IP address of your iSCSI target server.

bash-3.00# iscsiadm add discovery-address 10.6.140.88
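
Optionally, verify the discovery settings before listing the targets. Both are standard iscsiadm subcommands; the output will reflect your own configuration:

bash-3.00# iscsiadm list discovery
bash-3.00# iscsiadm list discovery-address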

You should now see the iSCSI target details discovered from the storage system (10.6.140.88):

bash-3.00# iscsiadm list target
Target: iqn.1986-03.com.sun:02:661c5ce0-9a48-4d29-dc88-be12e2937635
        Alias: ldom-target
        TPGT: 2
        ISID: 4000002a0000
        Connections: 1
bash-3.00#
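
If the new LUN does not show up in the format output below, you can ask Solaris to re-create the iSCSI device nodes first (devfsadm is the standard utility for this):

bash-3.00# devfsadm -i iscsi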

Format and Label the disk.

bash-3.00# format
Searching for disks...done

c5t600144F09E0C3A8000004C62AE220003d0: configured with capacity of 499.91GB

AVAILABLE DISK SELECTIONS:
       0. c1t0d0
          /pci@400/pci@0/pci@8/scsi@0/sd@0,0
       1. c1t1d0
          /pci@400/pci@0/pci@8/scsi@0/sd@1,0
       2. c5t600144F09E0C3A8000004C62AE220003d0
          /scsi_vhci/ssd@g600144f09e0c3a8000004c62ae220003
Specify disk (enter its number): 2
selecting c5t600144F09E0C3A8000004C62AE220003d0
[disk formatted]
Disk not labeled.  Label it now? yes

Create a ZFS pool on the iSCSI disk.

bash-3.00# zpool create iscsi-ldompool c5t600144F09E0C3A8000004C62AE220003d0s6
bash-3.00# zpool list
NAME             SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
iscsi-ldompool   496G  76.5K   496G     0%  ONLINE  -
ldompool         136G  50.6G  85.4G    37%  ONLINE  -
bash-3.00#
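
Optionally, verify the health of the new pool:

bash-3.00# zpool status iscsi-ldompool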

Installing Logical Domains Over ZFS

LDoms Manager 1.3 needs to be installed on top of Solaris 10u8 to set up the virtualization system. See http://www.sun.com/servers/coolthreads/ldoms/get.jsp

Download the LDoms_Manager-1_3.zip file, unzip it, and run the install-ldm script from the extracted directory.

# cd $DIR/LDoms_Manager-1_3
# ./Install/install-ldm

Verify the installation by executing the following command.

# /opt/SUNWldm/bin/ldm list
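
If ldm list fails to connect to the Logical Domains Manager, check that the ldmd service (svc:/ldoms/ldmd:default) is online and enable it if needed:

# svcs ldmd
# svcadm enable ldmd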

Allocate system resources to the primary (or control) domain

Creating the control domain with 8 vCPUs and 4 GB of RAM.

Note: If any cryptographic devices are assigned to the control domain, you cannot dynamically reconfigure its CPUs. If you are not using cryptographic devices, set the MAU count to 0.
Here, one cryptographic resource is assigned to the control domain, primary. This leaves the remainder of the cryptographic resources available to the guest domains.

# ldm set-mau 1 primary

Assigning 8 virtual CPUs and 4 GB of memory to the control domain, primary. This leaves the remainder of the virtual CPUs and memory available to the guest domains.

# ldm set-vcpu 8 primary
# ldm set-memory 4G  primary
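
The control domain also needs a virtual disk server, a virtual console concentrator, and a virtual switch before guest domains can use disks, consoles, and networking. If these services have not been created yet, a minimal sketch looks like the following (primary-vds0 and primary-vsw0 are the service names used later in this guide; net-dev=nxge0 assumes the same physical interface used in the virtual switch configuration below, and the console port range 5000-5100 is just an example):

# ldm add-vds primary-vds0 primary
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
# ldm add-vsw net-dev=nxge0 primary-vsw0 primary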

To make the modified configuration permanent, first check the configurations stored on the service processor using the list-spconfig option.

# ldm list-spconfig
factory-default [current]

Adding a logical domain machine configuration to the system controller (SC).

# ldm add-spconfig initial
# ldm list-spconfig
factory-default [current]
initial [next]

Reboot the server so that it comes up with the initial configuration.

# shutdown -i6 -g0 -y

Configure the virtual switch on the primary domain.

By default, networking between the control/service domain and other domains in the system is disabled. To enable this, the virtual switch device should be configured as a network device. The virtual switch can either replace the underlying physical device (nxge0 in this example) as the primary interface or be configured as an additional network interface in the domain.

Plumb the virtual switch (vsw0) on the primary domain

# ifconfig vsw0 plumb

Bring down the primary interface

# ifconfig nxge0 down unplumb

Configure virtual switch

# ifconfig vsw0 10.6.140.204 netmask 255.255.255.0 broadcast + up

Modify the hostname file to make this configuration permanent

# mv /etc/hostname.nxge0 /etc/hostname.vsw0

Enable virtual network terminal server daemon

# svcadm enable vntsd
# svcs vntsd
STATE          STIME    FMRI
online         Jun_17   svc:/ldoms/vntsd:default

Create a template LDOM (Oracle VM for SPARC virtual machine)

Create a ZFS file system that will be used to hold the virtual disks for the VMs. Either a local disk or the iSCSI target created on the storage can be used for the ZFS pool.
Create a ZFS pool on the local disk.

# zpool create ldompool c1t1d0
# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
ldompool    72K   134G    21K  /ldompool
# zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
ldompool   136G  76.5K   136G     0%  ONLINE  -

Creating a 25GB ZFS volume for the first disk.

# zfs create -V 25g ldompool/disk1
# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
ldompool        25.0G   109G    21K  /ldompool
ldompool/disk1    25G   134G    16K  -

Specify the device to be exported by the virtual disk server as a virtual disk to the guest domain.

# ldm add-vdiskserverdevice /dev/zvol/dsk/ldompool/disk1 vol1@primary-vds0

Creating a guest domain ld01 with 30 vCPUs and 7 GB of memory.

# ldm add-domain ld01
# ldm set-vcpu 30 ld01
# ldm set-memory 7G ld01

Add a virtual network device vnet1 and the virtual disk vdisk1 to the guest domain ld01

# ldm add-vnet vnet1 primary-vsw0 ld01
# ldm add-vdisk vdisk1 vol1@primary-vds0 ld01

Adding the Solaris ISO image as a virtual disk that will be used as the installation media.

# ldm add-vdiskserverdevice /installdvd/sol-10-u8-ga-sparc-dvd.iso iso@primary-vds0
# ldm add-vdisk iso iso@primary-vds0 ld01

Set the auto-boot variable for the guest domain, then bind and start the domain.

# ldm set-variable auto-boot\?=false ld01
# ldm bind-domain ld01
# ldm start-domain ld01
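
The console port assigned to ld01 appears in the CONS column of ldm list; in this walkthrough it is assumed to be 5000:

# ldm list ld01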

Connect to the guest domain console from the control domain and start the installation.

# telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connecting to console "ld01" in group "ld01" ....
Press ~? for control options ..
{0} ok show-disks
a) /virtual-devices@100/channel-devices@200/disk@1
b) /virtual-devices@100/channel-devices@200/disk@0
q) NO SELECTION
Enter Selection, q to quit: a
/virtual-devices@100/channel-devices@200/disk@1 has been selected.
Type ^Y ( Control-Y ) to insert it in the command line. e.g. ok nvalias mydev ^Y
for creating devalias mydev for
/virtual-devices@100/channel-devices@200/disk@1

{0} ok devalias
iso /virtual-devices@100/channel-devices@200/disk@1
vdisk1 /virtual-devices@100/channel-devices@200/disk@0
vnet0 /virtual-devices@100/channel-devices@200/network@0
net /virtual-devices@100/channel-devices@200/network@0
disk /virtual-devices@100/channel-devices@200/disk@0
virtual-console /virtual-devices/console@1
name aliases

Now boot from the virtual ISO image, appending :f (the letter f selects slice 5 of the DVD/ISO image, which holds the bootable image). This can also be done with boot iso:f

{0} ok boot /virtual-devices@100/channel-devices@200/disk@1:f

Follow the on-screen instructions to complete the rest of the Solaris installation.

Because the virtual disk lives on a ZFS pool, new LDoms can easily be created from a ZFS snapshot.
Execute sys-unconfig on ld01. This will halt the guest domain ld01 and allow a snapshot of the base LDom's disk to be taken.

# sys-unconfig

Stop the ld01 domain

# ldm stop-domain ld01

Remove the ISO disk from the guest domain and take a snapshot of the disk image

# ldm remove-vdisk iso ld01
# zfs snapshot ldompool/disk1@base

# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
ldompool             30.2G   104G    21K  /ldompool
ldompool/disk1       30.2G   129G  5.19G  -
ldompool/disk1@base      0      -  5.19G  -

Clone the golden LDom disk image for the new guest domain ld02

# zfs clone ldompool/disk1@base ldompool/disk2
# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
ldompool             30.2G   104G    21K  /ldompool
ldompool/disk1       30.2G   129G  5.19G  -
ldompool/disk1@base      0      -  5.19G  -
ldompool/disk2           0   104G  5.19G  -

Add a new domain

# ldm add-domain ld02
# ldm set-vcpu 30 ld02
# ldm set-memory 7G ld02
# ldm add-vnet vnet1 primary-vsw0 ld02

# ldm add-vdiskserverdevice /dev/zvol/dsk/ldompool/disk2 vol2@primary-vds0

# ldm add-vdisk vdisk1 vol2@primary-vds0 ld02
# ldm set-variable auto-boot\?=false ld02
# ldm bind-domain ld02
# ldm start-domain ld02
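
Connect to the console of ld02 and complete the system identification prompts, since the cloned disk image was unconfigured with sys-unconfig. Check the console port with ldm list; port 5001 below is only an assumption:

# ldm list ld02
# telnet localhost 5001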

Creating and Installing Solaris Containers (Zones)

Creating and Installing a Zone

This example shows how to create a zone (zone1) on the ZFS pool (zonepool).
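
The zonepool pool is assumed to exist already; if it does not, it can be created on a spare disk first. The device name c0d1 below is only a placeholder for a disk or virtual disk available on your system:

# zpool create zonepool c0d1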

# zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1 > create
zonecfg:zone1 > set zonepath=/zonepool/zone1
zonecfg:zone1 > set autoboot=true
zonecfg:zone1 > add net
zonecfg:zone1:net> set address=10.6.140.137
zonecfg:zone1:net> set physical=vnet0
zonecfg:zone1:net> end
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.168.2.201
zonecfg:zone1:net> set physical=vnet1
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> info
zonename: zone1
zonepath: /zonepool/zone1
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
    dir: /lib
inherit-pkg-dir:
    dir: /platform
inherit-pkg-dir:
    dir: /sbin
inherit-pkg-dir:
    dir: /usr
net:
    address: 10.6.140.137
    physical: vnet0
    defrouter not specified
net:
    address: 192.168.2.201
    physical: vnet1
    defrouter not specified
zonecfg:zone1> commit
zonecfg:zone1> exit
# chmod 700 /zonepool/zone1

Verify that the zone is configured correctly

# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - zone1            configured /zonepool/zone1                native   shared

Install the operating system in the new zone.

#  zoneadm -z zone1 install
cannot create ZFS dataset zonepool/zone1: dataset already exists
Preparing to install zone <zone1>.
Creating list of files to copy from the global zone.
Copying <7690> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1139> packages on the zone.
Initialized <1139> packages on zone.
Zone <zone1> is initialized.
The file </zonepool/zone1/root/var/sadm/system/logs/install_log> contains a log of the zone installation.
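
After the installation completes, boot the zone and attach to its console to finish the first-boot system identification (zlogin -C connects to the zone console):

# zoneadm -z zone1 boot
# zlogin -C zone1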