This is a short description of how to set up a VIOS Shared Storage Pool (SSP).
These Virtual I/O Servers will be part of this SSP:
s922-vio1.<your_domain_name>
s922-vio2.<your_domain_name>
s824-vio1.<your_domain_name>
s824-vio2.<your_domain_name>
e870-vio1.<your_domain_name>
e870-vio2.<your_domain_name>
p770plus-vio1.<your_domain_name>
p770plus-vio2.<your_domain_name>
For the storage part I have created nine LUNs on each of the two XIVs (XIV-1 and XIV-2): eight data LUNs of roughly 1 TB each and one small 16 GB LUN as a repository candidate.
All those LUNs have been zoned and mapped to all of the Virtual I/O Servers listed above.
The following picture should explain the setup:
The commands in this section have all been executed on the Virtual I/O Server s922-vio1.<your_domain_name> and need to be executed on each participating Virtual I/O Server.
After running cfgdev the following disks are recognized; hdisk4 through hdisk21 are to be used for the SSP.
$ lspv
NAME            PVID                VG              STATUS
hdisk0          00c11460617c49d2    rootvg          active
hdisk1          00c114606202da35    rootvg          active
hdisk2          00c11460e71ba2dc    None
hdisk3          none                None
hdisk4          none                None
hdisk5          none                None
hdisk6          none                None
hdisk7          none                None
hdisk8          none                None
hdisk9          none                None
hdisk10         none                None
hdisk11         none                None
hdisk12         none                None
hdisk13         none                None
hdisk14         none                None
hdisk15         none                None
hdisk16         none                None
hdisk17         none                None
hdisk18         none                None
hdisk19         none                None
hdisk20         none                None
hdisk21         none                None
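Since all of this has to be repeated on every participating Virtual I/O Server, it can be convenient to drive the commands from a central admin host over ssh. The following is only a sketch: it assumes ssh key login as padmin already works on every VIOS and uses the ioscli wrapper, since the padmin command aliases are not available in a non-interactive ssh session:

#!/bin/sh
# Sketch: run cfgdev and lspv on every participating VIOS from an admin host.
# Assumes passwordless ssh as padmin; host list as in this setup.
for vios in s922-vio1 s922-vio2 s824-vio1 s824-vio2 \
            e870-vio1 e870-vio2 p770plus-vio1 p770plus-vio2 ; do
    echo "### $vios"
    ssh padmin@$vios.<your_domain_name> "ioscli cfgdev; ioscli lspv"
done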
To make sure all involved disks have the correct parameters set, I ran the following script as root user. Each chdev is issued twice: once to change the running device and once with -P so the change is also stored in the ODM and survives a reboot:
$ cat set-ssp-disk-params.sh
#!/bin/sh
for i in 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 ; do
    d="hdisk"$i
    chdev -l $d -a algorithm=fail_over
    chdev -l $d -a algorithm=fail_over -P
    chdev -l $d -a reserve_policy=no_reserve
    chdev -l $d -a reserve_policy=no_reserve -P
    chdev -l $d -a hcheck_mode=nonactive
    chdev -l $d -a hcheck_mode=nonactive -P
    chdev -l $d -a hcheck_interval=60
    chdev -l $d -a hcheck_interval=60 -P
    chdev -l $d -a max_transfer=0x80000
    chdev -l $d -a max_transfer=0x80000 -P
done
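To double-check that the new values were really applied, the attributes can be queried as root with the standard AIX lsattr command (a quick verification sketch over the same disks):

# Verify the relevant attributes of each SSP candidate disk.
for i in 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 ; do
    echo "hdisk$i:"
    lsattr -El hdisk$i -a algorithm -a reserve_policy -a hcheck_mode -a hcheck_interval -a max_transfer
done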
The following script helps to identify the proper hdisks; for each disk it prints the XIV it comes from, its PVID, and its size in MB:
$ cat show-ssp-disks.sh
#!/bin/sh
for i in 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 ; do
    xiv=`lscfg -l hdisk$i | awk -F '-' '{ print $5 }'`
    pvid=`lspv | grep hdisk$i | awk '{ print $2 }'`
    echo "hdisk$i (XIV $xiv), $pvid:" `bootinfo -s hdisk$i`
done
Running this script as root user produces the following output:
$ ./show-ssp-disks.sh
hdisk4 (XIV W5001738006D70140), 00c11460d6de8052: 1015808
hdisk5 (XIV W5001738006D70140), 00c11460d6de8a48: 1015808
hdisk6 (XIV W5001738006D70140), 00c11460d6de975d: 1015808
hdisk7 (XIV W5001738006D70140), 00c11460d6dea5a7: 1015808
hdisk8 (XIV W5001738006D70140), 00c11460d6deae5e: 1015808
hdisk9 (XIV W5001738006D70140), 00c11460d6deb529: 1015808
hdisk10 (XIV W5001738006D70140), 00c11460d6dec11a: 1015808
hdisk11 (XIV W5001738006D70140), 00c11460d6dec8cd: 1015808
hdisk12 (XIV W5001738006D70140), 00c1146067415b35: 16384
hdisk13 (XIV W5001738006D30140), 00c11460d6ded0a3: 1015808
hdisk14 (XIV W5001738006D30140), 00c11460d6def807: 1015808
hdisk15 (XIV W5001738006D30140), 00c11460d6df16a4: 1015808
hdisk16 (XIV W5001738006D30140), 00c11460d6df1ec4: 1015808
hdisk17 (XIV W5001738006D30140), 00c11460d6df2591: 1015808
hdisk18 (XIV W5001738006D30140), 00c11460d6df2d16: 1015808
hdisk19 (XIV W5001738006D30140), 00c11460d6df368f: 1015808
hdisk20 (XIV W5001738006D30140), 00c11460d6df3ef9: 1015808
hdisk21 (XIV W5001738006D30140), 00c11460d6df46b0: 16384
From that output I chose hdisk12, one of the two small 16 GB LUNs, to be the SSP repository disk.
You need to make sure that all participating Virtual I/O Servers have a fully qualified hostname. If not, the cluster creation will fail:
$ cluster -create -clustername SSP_Cluster -spname SSP -sppvs hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9 hdisk10 hdisk11 -repopvs hdisk12 -hostname s922-vio1.<your_domain_name>
Bad Hostname: cluster -create requires the local node hostname.
Bad Hostname: Local node hostname needs to be fully qualified name, as resolved.
s922-vio1.<your_domain_name>
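To see what the cluster command is complaining about, check how the local hostname is set and whether it resolves. The following is a sketch to be run as root; chdev on inet0 is the generic AIX way to set the hostname persistently, but on a VIOS you may prefer to change it through cfgassist or mktcpip:

# Show the current hostname and how it resolves:
hostname
host `hostname`
# Set the fully qualified hostname persistently (example for this node):
chdev -l inet0 -a hostname=s922-vio1.<your_domain_name>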
After changing the hostname to the fully qualified name, the cluster creation is successful.
$ cluster -create -clustername SSP_Cluster -spname SSP -sppvs hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9 hdisk10 hdisk11 -repopvs hdisk12 -hostname s922-vio1.<your_domain_name>
Cluster SSP_Cluster has been created successfully.
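Before adding the other nodes I want to be sure the cluster and the pool really exist. Something along these lines should do (output omitted here):

$ cluster -list
$ cluster -status -clustername SSP_Cluster
$ lssp -clustername SSP_Cluster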
All our hdisks from XIV-1 have now been put into a failure group named Default. Now we create a second failure group for all the disks from XIV-2 and name it XIV_2:
$ failgrp -create -fg XIV_2: hdisk13 hdisk14 hdisk15 hdisk16 hdisk17 hdisk18 hdisk19 hdisk20
XIV_2 FailureGroup has been created successfully.
We rename the Default failure group to XIV_1 to reflect the fact that it contains only LUNs from XIV-1:
$ failgrp -modify -fg Default -attr fg_name=XIV_1
Given attribute(s) modified successfully.
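To confirm that both failure groups now exist under the intended names, they can simply be listed (output omitted):

$ failgrp -list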
Now add the remaining SSP cluster nodes:
$ cluster -addnode -clustername SSP_Cluster -hostname s922-vio2.<your_domain_name>
Partition s922-vio2.<your_domain_name> has been added to the SSP_Cluster cluster.
$ cluster -addnode -clustername SSP_Cluster -hostname s824-vio1.<your_domain_name>
Partition s824-vio1.<your_domain_name> has been added to the SSP_Cluster cluster.
$ cluster -addnode -clustername SSP_Cluster -hostname s824-vio2.<your_domain_name>
Partition s824-vio2.<your_domain_name> has been added to the SSP_Cluster cluster.
$ cluster -addnode -clustername SSP_Cluster -hostname e870-vio1.<your_domain_name>
Partition e870-vio1.<your_domain_name> has been added to the SSP_Cluster cluster.
$ cluster -addnode -clustername SSP_Cluster -hostname e870-vio2.<your_domain_name>
Partition e870-vio2.<your_domain_name> has been added to the SSP_Cluster cluster.
$ cluster -addnode -clustername SSP_Cluster -hostname p770plus-vio1.<your_domain_name>
Partition p770plus-vio1.<your_domain_name> has been added to the SSP_Cluster cluster.
$ cluster -addnode -clustername SSP_Cluster -hostname p770plus-vio2.<your_domain_name>
Partition p770plus-vio2.<your_domain_name> has been added to the SSP_Cluster cluster.
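With seven nodes to add this gets tedious; the same can be done with a small loop in the padmin shell on s922-vio1 (just a sketch, node names as above):

for node in s922-vio2 s824-vio1 s824-vio2 e870-vio1 e870-vio2 p770plus-vio1 p770plus-vio2 ; do
    cluster -addnode -clustername SSP_Cluster -hostname $node.<your_domain_name>
done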
That's it, our SSP cluster is now ready for clients…
$ cluster -status
Cluster Name         State
SSP_Cluster          OK

    Node Name        MTM                  Partition Num  State  Pool State
    s922-vio1        9009-22A067811460    1              OK     OK
    s922-vio2        9009-22A067811460    2              OK     OK
    s824-vio1        8286-42A0206036D6    1              OK     OK
    s824-vio2        8286-42A0206036D6    2              OK     OK
    e870-vio1        9119-MME020647C9R    1              OK     OK
    e870-vio2        9119-MME020647C9R    2              OK     OK
    p770plus-vio1    9117-MMD02103F55E    1              OK     OK
    p770plus-vio2    9117-MMD02103F55E    2              OK     OK
After installing a couple of clients, we can check the list of logical units (LUs) in the pool:
$ lu -list
POOL_NAME: SSP
TIER_NAME: SYSTEM
LU_NAME                  SIZE(MB)    UNUSED(MB)    UDID
ssp_aix61_1_root         51200       48929         75dc5ce8c93b3ccdce73999bcec544f7
ssp_aix71_1_root         51200       46792         0a5f20caf8aceae38d94e43dfe6d1362
ssp_aix72_1_root         51200       47927         d4e4e4fea801c641582ec725a4b257bf
ssp_centos7le_1_root     51200       48245         06432eb57da59a03a2131f5bdcf4dc08
ssp_rhel75le1_1_root     51200       48536         126b7c34b1f32bf561e18e375b80b7f3
ssp_sles12sp3_1_root     51200       41241         30184e6ee901e5aaf2a60254269ca19c
ssp_ubuntu1804_1_root    51200       44494         dec343ad4a6543272572fa01bec532c6
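For new clients, one way to create a thin-provisioned LU in the pool and map it to the client's virtual SCSI server adapter in a single step is mkbdsp. The following is only a sketch: the LU name is made up and vhost0 stands for whatever server adapter serves your client partition:

$ mkbdsp -clustername SSP_Cluster -sp SSP 50G -bd ssp_newclient_1_root -vadapter vhost0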