SAN Overview – Need for SAN Storage

1. The amount of data generated and stored on servers has increased significantly with the evolution of servers and applications. For an administrator, it is a constant challenge to maintain this data on the comparatively small local storage attached to the servers/nodes.

The options we have in such cases are:

  • Add more disks to the server.
  • Manually delete (or move the files to some temporary location) the old or less important files from the server to free up space.

2. The former approach is limited by the number of disk bays/slots on the physical server, while the latter depends on manual effort to clean up the storage.

3. To avoid such storage constraints, we can go with SAN storage technology.

4. In the most common context, a SAN is a block storage device.

5. SAN is a specialized high-speed network of storage devices and switches connected to computer systems.

6. A SAN presents shared pools of storage devices to multiple servers. Each server can access the storage as if it were directly attached to that server. A SAN supports centralized storage management.

7. SANs make it possible to move data between various storage devices, share data between multiple servers, and back up and restore data rapidly and efficiently.

8. In addition, a properly configured SAN facilitates both disaster recovery and high availability.

9. The physical components of a SAN can be grouped in a single rack or data center or connected over long distances. This makes a SAN a feasible solution for businesses of any size: the SAN can grow easily with the business it supports.

10. A SAN generally consists of three core components:

  • Hosts (or Nodes or Initiators): the end systems that use the SAN services; these can include servers and computers on the network.
  • Fabric: the connectivity components, such as Fibre Channel switches and host bus adapters (SAN cards), that link the hosts to the SAN infrastructure.
  • Storage: the physical storage drives.

SAN Terminology:

1. Logical Unit Number (LUN): a logical representation of a disk that is presented to a host/node.

2. Initiator: the client/node/host that consumes the storage.

3. Target: the storage system that serves the LUNs.

4. WWNN: World Wide Node Name, a unique vendor-assigned identifier for a node in the storage network.

5. WWPN: World Wide Port Name, assigned to every individual port on a node; analogous to a MAC address in Ethernet.

6. HBA: Host Bus Adapter (SAN card), installed on both SAN servers & hosts.

7. Zoning: configured on the SAN fabric switch; prevents unauthorized hosts from reaching the storage system (see the sketch after this list).

8. LUN Masking: configured on the storage system; locks a LUN down to the hosts that are authorized to access it.
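To make zoning concrete, here is a minimal sketch in Cisco MDS (NX-OS) syntax; the zone name, VSAN number, and WWPNs are hypothetical placeholders, and other fabric vendors (e.g. Brocade) use different commands:

switch(config)# zone name SMART1_TO_NETAPP vsan 10
switch(config-zone)# member pwwn 10:00:00:90:fa:xx:xx:xx
switch(config-zone)# member pwwn 50:0a:09:8x:xx:xx:xx:xx
switch(config)# zoneset name FABRIC_A_ZONESET vsan 10
switch(config-zoneset)# member SMART1_TO_NETAPP
switch(config)# zoneset activate name FABRIC_A_ZONESET vsan 10

Only initiators and targets zoned together can see each other; LUN masking then restricts which LUNs a zoned host may access.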

Sample SAN Topology

[Image: sample SAN topology diagram]

In the above topology, we have the following:

1. A Check Point Smart-1 3150 device with a SAN card (vendor: EMULEX) with 2 transceivers.

2. Two SAN switches in the SAN fabric, labeled FABRIC-A & FABRIC-B.

3. A SAN storage array with 2 controllers (HBAs), each containing 2 transceivers connected in a criss-cross manner to the SAN switches.

4. A LUN of 16TB to be assigned to the Smart-1 (node) device.

Lab Setup Details

Appliance Model: CP Smart-1 3150
On-Device Storage: 6 * 2TB in RAID6
Operating System: GAiA
Version: R80.10
Setup: Multi-Domain Server (MDS) with 3 Domains
SAN Storage Vendor: NetApp

Requirement

Integrate a 16TB SAN storage block with the Smart-1 3150 appliance and redirect the traffic logs to it.

Procedure

This requirement involves five steps:

  • Readiness on Node Level
  • Readiness on SAN Storage Server Level
  • Scanning for Hardware Changes on Smart-1 (Node)
  • Enabling Multi-Path
  • Redirecting Traffic Logs to SAN Storage

Readiness – On Smart-1 (Node) Level

1. The Smart-1 3150 device's current HDD layout: six 2TB HDDs in RAID6, yielding 8TB of usable storage on the device.

2. Turn off the Smart-1 appliance and unplug the power cables.

3. Insert the SAN card along with the transceivers supplied.

4. Plug in the power cables and turn on the Smart-1 appliance.

5. Connect the Smart-1 SAN module to the SAN switch (if multipath is to be configured, also connect a redundant link to the second SAN switch).

[Photo: SAN card with transceivers installed in the Smart-1 appliance]

Readiness – On SAN Storage Server Level

1. Check Point uses a SAN card from the vendor EMULEX.

2. Identify the WWPNs of the connected Smart-1 device, which are of the format xx:xx:xx:xx:xx:xx:xx:xx.

3. You will see the 2 initiator WWPN values on the SAN storage dashboard because the Smart-1 device has 2 transceivers in its HBA.

[Screenshot: initiator WWPNs listed on the SAN storage dashboard]

The two WWPNs belong to the EMULEX vendor.

[Screenshot: WWPN lookup identifying the EMULEX vendor]
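The initiator WWPNs can also be read on the Smart-1 appliance itself; the Linux Fibre Channel transport layer exposes them under sysfs (a quick check, assuming the Emulex driver has registered the FC host ports):

[Expert@HostName]# cat /sys/class/fc_host/host*/port_name

Each value is printed as a plain hexadecimal number; split it into two-digit pairs to match the xx:xx:... notation shown on the storage dashboard.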

4. Create a 16TB LUN block (or a size per your requirement) on the SAN storage array.

Scanning for Hardware Changes on Smart-1 (Node)

1. Once the storage block on the SAN storage server is assigned to the Smart-1 (node) appliance, either scan for the newly assigned SAN storage block on the Smart-1 appliance or reboot it.
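If you prefer rescanning over a reboot, the kernel can be told to rescan each SCSI host; a sketch follows, where the hostN numbers under /sys/class/scsi_host/ will differ per appliance:

[Expert@HostName]# ls /sys/class/scsi_host/
[Expert@HostName]# echo "- - -" > /sys/class/scsi_host/host0/scan

Repeat the echo for each hostN entry that belongs to the Emulex HBA.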

2. Run fdisk -l or cat /proc/partitions to confirm that the SAN storage block is mapped to the Smart-1 device. Because there are 4 I/O paths (cables) between the Smart-1 device & the SAN storage, the same LUN shows up as 4 disks of 16TB capacity.

[Screenshot: fdisk -l output listing the SAN disks]

[Screenshot: /proc/partitions output]

3. From the previous command output, we can make out the 4 disks (sdh, sdi, sdj & sdk) mapped to the Smart-1 device, each corresponding to a path that leads to my SAN storage server (NETAPP) via a SAN switch.

4. Here, there are 4 I/O paths (cables) between the Smart-1 node & the SAN storage. These I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Since each path currently appears as a separate disk, we need the Multipath feature to present them as a single device.

Enabling Multi-Path on Smart-1

1. Device Mapper Multipathing (DM-Multipath) allows you to configure multiple I/O paths between server nodes and storage arrays into a single device. Multipathing aggregates the I/O paths, creating a new device that consists of the aggregated paths.

2. By default, the Multipath feature is disabled.

3. Multipath properties are stored in the /etc/multipath.conf file. The content of the /etc/multipath.conf file:

[Screenshot: default /etc/multipath.conf content]

To enable multipath, take a backup of the original /etc/multipath.conf file and comment out the highlighted blacklist section.

[Screenshot: /etc/multipath.conf with the blacklist section commented out]
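For reference, the stock blacklist section that keeps multipathing disabled commonly looks like the sketch below; commenting out these three lines (as in the screenshot) is what enables the feature, so match this against your own file:

blacklist {
        devnode "*"
}

With devnode "*" blacklisted, every device is excluded from multipath handling, which is why the feature appears disabled by default.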

4. Start the DM-Multipath feature with the commands below:

i) Load the ‘dm-multipath’ driver:

[Expert@HostName]# modprobe dm-multipath

ii) Start the ‘multipathd’ service:

[Expert@HostName]# service multipathd start

iii) Configure the ‘multipathd’ service to be started at each boot:

[Expert@HostName]# chkconfig multipathd on

[Expert@HostName]# chkconfig --list multipathd

[Screenshot: chkconfig output confirming multipathd is enabled]

5. Check the current multipath configuration/topology:

[Expert@HostName]# multipath -l

[Screenshot: multipath -l output]

In the above output, mpath5 is the multipath device mapper, built on the dm-6 disk.
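To cross-check the mapping, you can also query the device-mapper layer directly (dmsetup ships with the same device-mapper stack; multipath -ll additionally probes the live path state, whereas -l only reports the cached topology):

[Expert@HostName]# dmsetup ls
[Expert@HostName]# multipath -ll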

6. Now, let's look at the device/partition list using fdisk -l.

[Screenshot: fdisk -l output showing /dev/dm-6]

We now have a device mapper with 16TB of storage on the /dev/dm-6 disk, and through it we can access the SAN storage.

7. Partition the SAN storage multipath disk using the GNU Parted tool, following the procedure below:

[Screenshot: GNU Parted partitioning session]
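For reference, a minimal Parted session for this step looks like the sketch below (the mapper name mpath5 matches this lab, but the exact mkpart start/end syntax can vary between Parted versions):

[Expert@HostName]# parted /dev/mapper/mpath5
(parted) mklabel gpt
(parted) mkpart primary ext3 0% 100%
(parted) print
(parted) quit

A GPT disk label is required because an MS-DOS (MBR) label cannot address a 16TB disk; this is the same 2TB limit that rules out fdisk (see Limitations).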

8. Run the fdisk -l command to check the status of the multipath device mapper disk. A new partition has been created on the dm-6 disk with the name dm-6p1.

[Screenshot: fdisk -l output showing the new dm-6p1 partition]

9. The multipath device mapper's new partition dm-6p1 is available under the /dev/mapper directory.

[Screenshot: /dev/mapper directory listing]

10. Create an EXT3 file system on the new partition (this process takes 15-20 minutes; do not interrupt it).

[Screenshot: EXT3 file system creation]
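The command behind this step is a single mkfs invocation (assuming the partition node appears as /dev/mapper/dm-6p1 as shown above; on some setups it is named after the mpath alias instead, e.g. mpath5p1):

[Expert@HostName]# mkfs.ext3 /dev/mapper/dm-6p1

Creating the file system on the multipath mapper partition, rather than on sdh/sdi/sdj/sdk directly, keeps all I/O on the aggregated paths.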

11. Check the block device attributes using the blkid command-line utility. Note down the UUID of the multipath SAN storage disk (the UUID will be used to edit the /etc/fstab file later).

[Screenshot: blkid output]

12. Mount the new partition dm-6p1 to a directory.

[Screenshot: mounting the new partition]
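A minimal version of this step, using the /san mount point referenced later in this document:

[Expert@HostName]# mkdir -p /san
[Expert@HostName]# mount -t ext3 /dev/mapper/dm-6p1 /san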

13. Check that the SAN storage block is now reflected on the mount point.

[Screenshot: the mounted SAN storage block]

14. For this new partition to be mounted during operating system boot, enter the details of the partition in the /etc/fstab file. Current content of the /etc/fstab file:

[Screenshot: current /etc/fstab content]

Edit the file and add the attributes of the new partition in the below format:

UUID=<UUID_Value> <Mounting_Point> ext3 defaults 0 0

[Screenshot: /etc/fstab with the new partition entry added]
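Before rebooting, you can sanity-check the new entry: mount -a attempts to mount everything listed in /etc/fstab, so a typo will surface now rather than at boot time.

[Expert@HostName]# umount /san
[Expert@HostName]# mount -a
[Expert@HostName]# df -kh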

15. Reboot the Smart-1 device and check that the changes (the mount point & the SAN storage in the df -kh output) persist after the reboot.

Redirecting the Logs to SAN Storage

1. Download the Smart-1_SAN_logs_redirection.zip file from the Check Point User Center via the link below:

http://downloads.checkpoint.com/dc/download.htm?ID=10636

2. Extract the Smart-1_SAN_logs_redirection.zip file and copy the two scripts, linklogs.sh & mdsadd_customer, to a directory on the Smart-1 device.

[Screenshot: extracted scripts on the Smart-1 device]

3. Copy the linklogs.sh script to the $MDSDIR/scripts directory and make the file executable by running the command:

# chmod +x $MDSDIR/scripts/linklogs.sh

[Screenshot: linklogs.sh copied to $MDSDIR/scripts and made executable]

4. Start the log redirection process by running the script (the script will stop the MDS processes during its execution):

# linklogs.sh all move <target_dir>

[Screenshots: linklogs.sh execution output]

Note: If you want to redirect the logs to a new directory without copying the current logs to the target directory, use:

# linklogs.sh all link <target_dir>

This command can be especially useful if there is a problem with the storage server and you want to redirect the logs back to the Smart-1 appliance.

5. To redirect the logs for newly created Domain Management Servers to the target directory (/san in my case), copy the mdsadd_customer script to the $MDSDIR/scripts directory.
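The copy mirrors what was done for linklogs.sh (the source path is wherever you extracted the zip file):

[Expert@HostName]# cp mdsadd_customer $MDSDIR/scripts/
[Expert@HostName]# chmod +x $MDSDIR/scripts/mdsadd_customer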

[Screenshots: mdsadd_customer script in $MDSDIR/scripts]

6. Start the MDS processes:
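On a Multi-Domain Server this is done with the standard MDS control commands (mdsstart and mdsstat ship with the MDS installation; mdsstat is the check used in the next step):

[Expert@HostName]# mdsstart
[Expert@HostName]# mdsstat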

[Screenshot: MDS processes starting]

7. Check the state of the MDS processes and that the logs are being redirected to the SAN storage.

[Screenshot: MDS processes state and logs on the SAN storage]

Limitations

1. The fdisk utility cannot create a partition on a disk larger than 2TB. Use the GNU Parted utility instead.

2. The Multipath feature is supported only in the following scenarios:

Appliance Model: Smart-1 225, Smart-1 3050, Smart-1 3150
Check Point OS & Version: GAiA R77.10 & above

 

 

Raghu K
Senior Network Security Engineer at QOS Technology