CFS Cluster - HP-UX
What you can build with this documentation (see also the documentation from HP.com)
Installation
Install the required packages
For a cluster we need at least two boxes, so we install HP-UX 11.31 DCOE on both nodes. After that we install all the required software for the CFS / SG cluster:
- Install the patch PHSS_40152 (included in PHSS_40583)
- Install the Serviceguard Storage Management Suite Nov 2009
# swinstall -s /var/tmp/patches/PHSS_40583.depot \*
# swremove B3929FB
# swinstall -s ... Base-VxFS-501 \*
# swinstall -s ... Base-VxTools-501 \*
# swinstall -s ... Base-VxVM-501 \*
# swinstall -s ... EventMonitoring \*
# swinstall -s ... FEATURE11i \*
# swinstall -s ... OpenSSL \*
# swinstall -s ... T8695DB \*
# swinstall -s ... VxFS-SDK-501 \*
# swinstall -s ... WBEMSvcs \*
Create a cluster
- Check the network settings
- Create the /etc/cmcluster/cmclnodelist file
- Check the configuration
- Tell Serviceguard to use only IPv4 addresses for the heartbeat; use the -h option
- Create a cluster (use Serviceguard Manager from SMH → http://vm1:2301)
- Don't forget to add a lock disk
- Don't forget to add two Heartbeat networks
- First, make sure the cluster is running:
- Add the Veritas commands to the path
- Initialize the Veritas Volume Manager
- This displays a menu-driven program that steps you through the VxVM/CVM initialization sequence.
- Initializing Disks for CVM
- You need to initialize the physical disks that will be used in CVM disk groups. If a physical disk has previously been used with LVM, use the pvremove command to delete the LVM header data from all the disks in the volume group, or overwrite the start of the disk with dd.
- Activate the SG-CFS-pkg and start up CVM with the cfscluster command; this creates the SG-CFS-pkg package and also starts it
- Start the cluster separately from configuring it
- Verify the system multi-node package is running and CVM is up, using the
cmviewcl or cfscluster command.

[vm1]# cfscluster status

Node            : hpvm10-vm1
Cluster Manager : up
CVM state       : up
MOUNT POINT  TYPE  SHARED VOLUME  DISK GROUP  STATUS

Node            : hpvm10-vm3
Cluster Manager : up
CVM state       : up (MASTER)
MOUNT POINT  TYPE  SHARED VOLUME  DISK GROUP  STATUS
- Find the master node using
vxdctl
or cfscluster status
- Initialize a new disk group, or import an existing disk group, in shared mode, using the
vxdg
command
- For a new disk group use the init option:
- For an existing disk group, use the import option:
- Verify the disk group. The state should be enabled and shared:
- Use the
cfsdgadm
command to create the package SG-CFS-DG-ID#, where ID# is an automatically incremented number assigned by Serviceguard
- You can verify the package creation with the
cmviewcl
command, or with the cfsdgadm display command.
- Activate the disk group and start up the package
- To verify, you can use
cfsdgadm
or cmviewcl.
- To view the name of the package that is monitoring a disk group, use the
cfsdgadm show_package
command - Make
cfsvol
volume on thecfsdg
disk group - To create a host based mirror over enclosures use the command
- Use the vxprint command to verify
- Create a filesystem
- Create the cluster mount point
- Package name “SG-CFS-MP-1” is generated to control the resource.
- You do not need to create the directory. The command creates one on each of the nodes, during the mount.
- Verify with
cmviewcl
or cfsmntadm display.

[vm3]# cfsmntadm display

Cluster Configuration for Node: hpvm10-vm1
MOUNT POINT  TYPE     SHARED VOLUME  DISK GROUP  STATUS
/cfs         regular  cfsvol         cfsdg       MOUNTED

Cluster Configuration for Node: hpvm10-vm3
MOUNT POINT  TYPE     SHARED VOLUME  DISK GROUP  STATUS
/cfs         regular  cfsvol         cfsdg       MOUNTED
- Mount the filesystem
- This starts up the multi-node package and mounts a cluster-wide filesystem.
- Verify that multi-node package is running and filesystem is mounted
- To view the package name that is monitoring a mount point, use the
cfsmntadm show_package
command
[both]# vi /etc/hosts

172.16.19.182   hpvm10-vm1   hpvm10-vm1.work
172.16.19.183   hpvm10-vm3   hpvm10-vm3.work
[both]# vi /etc/nsswitch.conf

passwd:     compat
group:      compat
hosts:      files dns
ipnodes:    files dns
networks:   files
protocols:  files
rpc:        files
publickey:  files
netgroup:   files
automount:  files
aliases:    files
services:   files
[vm1]# vi /etc/rc.config.d/netconf

INTERFACE_NAME[0]=lan0
IP_ADDRESS[0]=172.16.19.182
SUBNET_MASK[0]=255.255.255.0
BROADCAST_ADDRESS[0]=172.16.19.255
INTERFACE_STATE[0]=""
DHCP_ENABLE[0]=1
INTERFACE_MODULES[0]=""

INTERFACE_NAME[1]=lan1
IP_ADDRESS[1]=10.5.5.1
SUBNET_MASK[1]=255.255.255.252
BROADCAST_ADDRESS[1]=10.5.5.3
INTERFACE_STATE[1]=""
DHCP_ENABLE[1]=0
INTERFACE_MODULES[1]=""

INTERFACE_NAME[2]=lan2
IP_ADDRESS[2]=10.6.6.1
SUBNET_MASK[2]=255.255.255.252
BROADCAST_ADDRESS[2]=10.6.6.3
INTERFACE_STATE[2]=""
DHCP_ENABLE[2]=0
INTERFACE_MODULES[2]=""

[vm3]# vi /etc/rc.config.d/netconf

INTERFACE_NAME[0]=lan0
IP_ADDRESS[0]=172.16.19.183
SUBNET_MASK[0]=255.255.255.0
BROADCAST_ADDRESS[0]=172.16.19.255
INTERFACE_STATE[0]=""
DHCP_ENABLE[0]=1
INTERFACE_MODULES[0]=""

INTERFACE_NAME[1]=lan1
IP_ADDRESS[1]=10.5.5.2
SUBNET_MASK[1]=255.255.255.252
BROADCAST_ADDRESS[1]=10.5.5.3
INTERFACE_STATE[1]=""
DHCP_ENABLE[1]=0
INTERFACE_MODULES[1]=""

INTERFACE_NAME[2]=lan2
IP_ADDRESS[2]=10.6.6.2
SUBNET_MASK[2]=255.255.255.252
BROADCAST_ADDRESS[2]=10.6.6.3
INTERFACE_STATE[2]=""
DHCP_ENABLE[2]=0
INTERFACE_MODULES[2]=""
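The heartbeat LANs above use a 255.255.255.252 (/30) mask, which leaves room for exactly the two cluster nodes. A quick sanity check of the BROADCAST_ADDRESS values (a generic POSIX-shell sketch, not an HP-UX tool):

```shell
#!/bin/sh
# Compute the broadcast address for an IP/netmask pair, octet by octet:
# broadcast = (ip AND mask) OR (NOT mask)
broadcast() {
    ip=$1 mask=$2
    oldIFS=$IFS; IFS=.
    set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4
    set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
    IFS=$oldIFS
    echo "$(( (i1 & m1) | (255 - m1) )).$(( (i2 & m2) | (255 - m2) )).$(( (i3 & m3) | (255 - m3) )).$(( (i4 & m4) | (255 - m4) ))"
}

broadcast 10.5.5.1 255.255.255.252     # -> 10.5.5.3
broadcast 172.16.19.182 255.255.255.0  # -> 172.16.19.255
```

This confirms the 10.5.5.3 / 10.6.6.3 broadcast entries for the point-to-point heartbeat links.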
[both]# vi /etc/cmcluster/cmclnodelist

hpvm10-vm1 root
hpvm10-vm3 root
[vm1]# cmquerycl -v -C /etc/cmcluster/cmclconf.ascii -n hpvm10-vm1 -n hpvm10-vm3
[vm1]# cmquerycl -v -h ipv4 -C /etc/cmcluster/cmclconf.ascii -n hpvm10-vm1 -n hpvm10-vm3
[vm1]# cmviewcl
[vm1]# cmruncl
# cd
# vi .profile
...
PATH=...:/opt/VRTS/bin
...
# . .profile
[both]# vxinstall
[both]# vxdisksetup -i disk14
# dd if=/dev/zero of=/dev/rdisk/disk14 bs=1024 count=10000
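When many LUNs have to be prepared, the pvremove / dd / vxdisksetup steps can be wrapped in a loop. A sketch — the disk names are placeholders and the commands are destructive, so the loop only prints them; drop the echo to run for real:

```shell
#!/bin/sh
# Print the disk-preparation commands for a list of disks.
# Remove the "echo" in front of each command to actually execute it.
for d in disk14 disk15 disk16; do
    echo "pvremove /dev/rdisk/$d"                               # drop any old LVM header
    echo "dd if=/dev/zero of=/dev/rdisk/$d bs=1024 count=10000" # or wipe the start of the disk
    echo "vxdisksetup -i $d"                                    # put a VxVM label on the disk
done
```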
[vm1]# cfscluster config -t 900 -s
[vm1]# cfscluster start
[vm3]# cfscluster status
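Which node is the CVM master can also be picked out of the status output mechanically. A sketch using awk against sample output; on the real cluster, pipe cfscluster status into the awk program instead of the here-document:

```shell
#!/bin/sh
# Remember the last "Node : <name>" seen; when the "CVM state" line for that
# node carries the MASTER tag, print the node name.
awk '/^Node/                 { node = $NF }
     /CVM state/ && /MASTER/ { print node }' <<'EOF'
Node             : hpvm10-vm1
Cluster Manager  : up
CVM state        : up
Node             : hpvm10-vm3
Cluster Manager  : up
CVM state        : up (MASTER)
EOF
```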
[vm3]# vxdg -s init cfsdg disk14
[vm3]# vxdg -C -s import cfsdg
[vm3]# vxdg list

NAME   STATE               ID
cfsdg  enabled,shared,cds  1268306669.30.hpvm10-v
[vm3]# cfsdgadm add cfsdg all=sw
[vm3]# cfsdgadm display

Node Name : hpvm10-vm1
DISK GROUP  ACTIVATION MODE
cfsdg       off (sw)

Node Name : hpvm10-vm3 (MASTER)
DISK GROUP  ACTIVATION MODE
cfsdg       off (sw)
[vm3]# cfsdgadm activate cfsdg
[vm3]# cfsdgadm display -v cfsdg

NODE NAME    ACTIVATION MODE
hpvm10-vm1   sw (sw)
 MOUNT POINT  SHARED VOLUME  TYPE
 /cfs         cfsvol         regular
hpvm10-vm3   sw (sw)
 MOUNT POINT  SHARED VOLUME  TYPE
 /cfs         cfsvol         regular
[vm3]# cfsdgadm show_package cfsdg
SG-CFS-DG-1
[vm3]# vxassist -g cfsdg make cfsvol 9g
[vm3]# vxassist -g cfsdg make cfsvol 9g layout=mirror mirror=enclr
[vm3]# vxprint cfsvol

Disk group: cfsdg

TY  NAME       ASSOC      KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
v   cfsvol     fsgen      ENABLED  9437184  -       ACTIVE  -       -
pl  cfsvol-01  cfsvol     ENABLED  9437184  -       ACTIVE  -       -
sd  disk14-01  cfsvol-01  ENABLED  9437184  0       -       -       -
[vm3]# newfs -F vxfs /dev/vx/rdsk/cfsdg/cfsvol
[vm3]# cfsmntadm add cfsdg cfsvol /cfs all=rw
[vm3]# cfsmount /cfs
[vm3]# cmviewcl

CLUSTER      STATUS
MyCluster    up

NODE         STATUS     STATE
hpvm10-vm1   up         running
hpvm10-vm3   up         running

MULTI_NODE_PACKAGES

PACKAGE      STATUS     STATE     AUTO_RUN   SYSTEM
SG-CFS-pkg   up         running   enabled    yes
SG-CFS-DG-1  up         running   enabled    no
SG-CFS-MP-1  up         running   enabled    no
[vm3]# cfsmntadm show_package /cfs
SG-CFS-MP-1
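A quick health check can be scripted on top of cmviewcl: once the PACKAGE header has been seen, any package whose STATUS column is not "up" is reported. A sketch against sample output; replace the here-document with a pipe from cmviewcl on the cluster:

```shell
#!/bin/sh
# Report multi-node packages that are not up; exit non-zero if any are found.
awk 'seen && NF >= 3 && $2 != "up" { print "NOT UP:", $1; bad = 1 }
     /^PACKAGE/ { seen = 1 }
     END { exit bad }' <<'EOF'
PACKAGE        STATUS      STATE        AUTO_RUN     SYSTEM
SG-CFS-pkg     up          running      enabled      yes
SG-CFS-DG-1    up          running      enabled      no
SG-CFS-MP-1    up          running      enabled      no
EOF
```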
View the configuration
- View disk group information
# vxprint
# vxlist -g dgxx
# vxlist -al volxx
Another Cookbook
- Create the SG Cluster
- Create and adjust the file
/etc/cmcluster/cmclnodelist
- Adjust the file
/etc/rc.config.d/cmcluster
- Check that auth is enabled in the file /etc/inetd.conf
- Check the network connections
- Check the connection to the quorum server or the quorum disk
- Prepare the cluster config file
- Add the quorum server to the cluster config file (the quorum server uses port 1238)
- Edit the file
cmclconf.ascii
- Apply the configuration
- Start the cluster
- Start the CFS-cluster
- Check which node is the CFS master node:
# vxdctl -c mode
or
# cfscluster status
- Create a table of the san disks
- Prepare the disks for the usage under VxVM
- Create the diskgroups
- Create the diskgroup-packages
- Activate the diskgroups
- Create the volumes
- Create the filesystems on the volumes
- Create the mountpoint-packages
- Mount the cluster filesystems
- Adjust the permissions
- Create links if expected
- Create the application-packages
# vi /etc/cmcluster/cmclnodelist
# vi /etc/rc.config.d/cmcluster
# vi /etc/inetd.conf
# inetd -c
# cmquerycl -v -n <node1> -n <node2> -C /etc/cmcluster/cmclconf.ascii
# vi /etc/cmcluster/cmclconf.ascii
# cmapplyconf -v -C /etc/cmcluster/cmclconf.ascii
# cmruncl -v
# cfscluster config
# cfscluster start

or

# cfscluster config -s

→ A package named SG-CFS-pkg must run as a system multi-node package
# init_disks.sh
On the CFS master node:
# create_dgs.sh
# cfsdgadm.sh
# cfsdgactivate.sh
# create_vols.sh
# create_fs.sh
# create_mp.sh
# cfsmount.sh
# create_perms.sh
# create_links.sh
# create_package.sh
cmclconf.ascii
CLUSTER_NAME                oracle_cfs_cluster
HOSTNAME_ADDRESS_FAMILY     IPV4
# CLUSTER_LOCK_LUN          /dev/disk/disk4_p2
# QS_HOST                   qs_host
# QS_ADDR                   qs_addr
# QS_POLLING_INTERVAL       120000000
# QS_TIMEOUT_EXTENSION      2000000
#FIRST_CLUSTER_LOCK_VG

NODE_NAME node1
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 192.168.1.141
  NETWORK_INTERFACE lan6
    HEARTBEAT_IP 192.168.2.141
  NETWORK_INTERFACE lan900
    STATIONARY_IP 10.238.87.141
  CLUSTER_LOCK_LUN /dev/disk/disk148

NODE_NAME node2
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 192.168.1.41
  NETWORK_INTERFACE lan6
    HEARTBEAT_IP 192.168.2.41
  NETWORK_INTERFACE lan900
    STATIONARY_IP 10.238.87.41
  CLUSTER_LOCK_LUN /dev/disk/disk146

MEMBER_TIMEOUT              14000000
AUTO_START_TIMEOUT          600000000
NETWORK_POLLING_INTERVAL    2000000
# CONFIGURED_IO_TIMEOUT_EXTENSION 0
NETWORK_FAILURE_DETECTION   INOUT
NETWORK_AUTO_FAILBACK       YES
SUBNET 10.238.87.0
  IP_MONITOR OFF
MAX_CONFIGURED_PACKAGES     300
# WEIGHT_NAME
# WEIGHT_DEFAULT
USER_NAME ANY_USER
USER_HOST ANY_SERVICEGUARD_NODE
USER_ROLE MONITOR
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
init_disks.sh
vxdisksetup -i disk60
vxdisksetup -i disk25
vxdisksetup -i disk65
vxdisksetup -i disk27
vxdisksetup -i disk66
vxdisksetup -i disk28
vxdisksetup -i disk67
vxdisksetup -i disk68
vxdisksetup -i disk69
vxdisksetup -i disk70
...
create_dgs.sh
vxdg -s init dglog disk60_loc1=disk60
vxdg -g dglog adddisk disk25_loc2=disk25
vxdg -s init dgcfs disk65_loc1=disk65
vxdg -g dgcfs adddisk disk27_loc2=disk27
vxdg -s init dgoracle disk66_loc1=disk66
vxdg -g dgoracle adddisk disk28_loc2=disk28
vxdg -s init dgbackup disk67_loc1=disk67
vxdg -g dgbackup adddisk disk68_loc1=disk68
vxdg -g dgbackup adddisk disk69_loc1=disk69
vxdg -g dgbackup adddisk disk70_loc1=disk70
vxdg -g dgbackup adddisk disk71_loc1=disk71
vxdg -g dgbackup adddisk disk72_loc1=disk72
vxdg -g dgbackup adddisk disk29_loc2=disk29
...
cfsdgadm.sh
cfsdgadm add dglog all=sw
cfsdgadm add dgcfs all=sw
cfsdgadm add dgoracle all=sw
cfsdgadm add dgbackup all=sw
cfsdgadm add dgoradata all=sw
cfsdgadm add dgoradata0 all=sw
cfsdgadm add dgoradata1 all=sw
cfsdgadm add dgoradata2 all=sw
cfsdgadm add dgoradata3 all=sw
cfsdgadm add dgoradata4 all=sw
cfsdgactivate.sh
cfsdgadm activate dglog
cfsdgadm activate dgcfs
cfsdgadm activate dgoracle
cfsdgadm activate dgbackup
cfsdgadm activate dgoradata
cfsdgadm activate dgoradata0
cfsdgadm activate dgoradata1
cfsdgadm activate dgoradata2
cfsdgadm activate dgoradata3
cfsdgadm activate dgoradata4
create_vols.sh
vxassist -g dglog make vollog 56g layout=mirror,nolog mirror=enclr
vxassist -g dglog make volscratch 4g layout=mirror,nolog mirror=enclr
vxassist -g dgcfs make volprod 4g layout=mirror,nolog mirror=enclr
vxassist -g dgcfs make volhome 4g layout=mirror,nolog mirror=enclr
vxassist -g dgcfs make voldata 4g layout=mirror,nolog mirror=enclr
vxassist -g dgoracle make volora 20g layout=mirror,nolog mirror=enclr
vxassist -g dgbackup make volbackup 600g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata make voloradata 400g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata0 make voloradata0 180g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata1 make voloradata1 140g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata2 make voloradata2 200g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata3 make voloradata3 240g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata4 make voloradata4 60g layout=mirror,nolog mirror=enclr
create_fs.sh
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgbackup/volbackup
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgcfs/voldata
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgcfs/volhome
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgcfs/volprod
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dglog/vollog
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dglog/volscratch
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoracle/volora
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata/voloradata
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata0/voloradata0
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata1/voloradata1
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata2/voloradata2
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata3/voloradata3
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata4/voloradata4
create_mp.sh
cfsmntadm add dgbackup volbackup /cfs/appl/backup all=rw
cfsmntadm add dgcfs voldata /cfs/data/data2 all=rw
cfsmntadm add dgcfs volhome /cfs/apphome all=rw
cfsmntadm add dgcfs volprod /cfs/prod all=rw
cfsmntadm add dglog vollog /cfs/log all=rw
cfsmntadm add dglog volscratch /cfs/scratch all=rw
cfsmntadm add dgoracle volora /cfs/appl/oracle all=rw
cfsmntadm add dgoradata voloradata /cfs/oradata/expimp all=rw
cfsmntadm add dgoradata0 voloradata0 /cfs/oradata/part00 all=rw
cfsmntadm add dgoradata1 voloradata1 /cfs/oradata/part01 all=rw
cfsmntadm add dgoradata2 voloradata2 /cfs/oradata/part02 all=rw
cfsmntadm add dgoradata3 voloradata3 /cfs/oradata/part03 all=rw
cfsmntadm add dgoradata4 voloradata4 /cfs/oradata/part04 all=rw
cfsmount.sh
cfsmount /cfs/appl/backup
cfsmount /cfs/data/data2
cfsmount /cfs/apphome
cfsmount /cfs/prod
cfsmount /cfs/log
cfsmount /cfs/scratch
cfsmount /cfs/appl/oracle
cfsmount /cfs/oradata/expimp
cfsmount /cfs/oradata/part00
cfsmount /cfs/oradata/part01
cfsmount /cfs/oradata/part02
cfsmount /cfs/oradata/part03
cfsmount /cfs/oradata/part04
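The disk group / volume / mount point mapping appears in both create_mp.sh and cfsmount.sh; keeping it in one table and generating both command sets keeps the two scripts from drifting apart. A sketch (the table rows are a subset of the ones above; the generated commands are printed, pipe them to sh to run):

```shell
#!/bin/sh
# Generate matching cfsmntadm and cfsmount calls from one dg/vol/mountpoint table.
while read dg vol mp; do
    echo "cfsmntadm add $dg $vol $mp all=rw"
    echo "cfsmount $mp"
done <<'EOF'
dgcfs     volprod  /cfs/prod
dglog     vollog   /cfs/log
dgoracle  volora   /cfs/appl/oracle
EOF
```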
create_perms.sh
chown root:root /cfs
chmod -R 755 /cfs
chown oracle:oinstall /cfs/volbackup
chown root:root /cfs/voldata
chown root:root /cfs/volhome
chown root:root /cfs/vollog
chown oracle:oinstall /cfs/volora
chown oracle:oinstall /cfs/voloradata
chown oracle:oinstall /cfs/voloradata0
chown oracle:oinstall /cfs/voloradata1
chown oracle:oinstall /cfs/voloradata2
chown oracle:oinstall /cfs/voloradata3
chown oracle:oinstall /cfs/voloradata4
chown root:root /cfs/volprod
chown root:root /cfs/volscratch
create_links.sh
ln -sf /cfs/log /log
ln -sf /cfs/prod /prod
ln -sf /cfs/apphome /apphome
ln -sf /cfs/data/data2 /data/data2
ln -sf /cfs/scratch /scratch
Create an application package
- Create a package with the given modules
- Adjust the config files:
# vi /etc/cmcluster/oracle/oracle.config
- Create a start/stop-script
- Adjust the
pkg_oracle.sh
Functions:
- validate_command is executed during the cmapplyconf command.
- start_command is executed during package startup.
- stop_command is executed when the package stops.
- To enable package environment variables
- In the package config file use
- In the start/stop-script use
- Activate the package
- Start the package
# cmmakepkg -m sg/basic -m sg/dependency -m sg/external -m sg/failover -m sg/monitor_subnet \
  -m sg/package_ip -m sg/pev /etc/cmcluster/oracle/oracle.config
# cp /etc/cmcluster/examples/*.template /cfs/appl/oracle/cluster/pkg_oracle.sh
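The template copied above boils down to the three entry points named earlier. A stripped-down sketch of that structure — the oracle start/stop paths are placeholders, not the actual template contents:

```shell
#!/bin/sh
# Sketch of a Serviceguard external script (pkg_oracle.sh) skeleton.
# The shipped template defines the same three entry points; the bodies
# here are placeholders, and start_ora.sh/stop_ora.sh are invented paths.

validate_command() {    # run by cmcheckconf / cmapplyconf
    # e.g. check that the startup script exists and is executable
    [ -x /cfs/appl/oracle/bin/start_ora.sh ] || return 1
    return 0
}

start_command() {       # run when the package starts
    echo "starting oracle"
    # su - oracle -c /cfs/appl/oracle/bin/start_ora.sh
    return 0
}

stop_command() {        # run when the package halts
    echo "stopping oracle"
    # su - oracle -c /cfs/appl/oracle/bin/stop_ora.sh
    return 0
}

# Serviceguard calls the script with the requested operation as argument
case "$1" in
    validate) validate_command ;;
    start)    start_command    ;;
    stop)     stop_command     ;;
esac
```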
pev_<name>
$PEV_<NAME>
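How a pev_ attribute travels from the config file into the script's environment can be illustrated with a small sketch. pev_dbname is an invented example name, and on a real cluster Serviceguard performs the export itself — this only mimics the naming convention shown above:

```shell
#!/bin/sh
# A hypothetical config line:  pev_dbname oracle
config_line="pev_dbname oracle"

suffix=${config_line%% *}; suffix=${suffix#pev_}   # dbname
value=${config_line#* }                            # oracle
var="PEV_$(echo "$suffix" | tr 'a-z' 'A-Z')"       # PEV_DBNAME
eval "export $var=\"\$value\""

# The start/stop script can now read the variable:
echo "$PEV_DBNAME"     # -> oracle
```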
# cmapplyconf -P /etc/cmcluster/oracle/oracle.config
# cmmodpkg -e oracle

or
# cmrunpkg -v -n <node> oracle
oracle.config
package_name                oracle
package_description         "Serviceguard Package for ORACLE"

module_name                 sg/basic            module_version 1
module_name                 sg/priority         module_version 1
module_name                 sg/dependency       module_version 1
module_name                 sg/external         module_version 1
module_name                 sg/failover         module_version 1
module_name                 sg/monitor_subnet   module_version 1
module_name                 sg/package_ip       module_version 1
module_name                 sg/pev              module_version 1

package_type                failover
node_name                   node1
node_name                   node2
auto_run                    yes
node_fail_fast_enabled      no
run_script_timeout          600
halt_script_timeout         600
successor_halt_timeout      no_timeout
script_log_file             $SGCONF/oracle/oracle.log
operation_sequence          $SGCONF/scripts/sg/package_ip.sh
operation_sequence          $SGCONF/scripts/sg/external.sh
#log_level
failover_policy             configured_node
failback_policy             manual
priority                    no_priority

dependency_name             MP7
dependency_condition        SG-CFS-MP-7 = up
dependency_location         same_node
dependency_name             MP8
dependency_condition        SG-CFS-MP-8 = up
dependency_location         same_node
dependency_name             MP9
dependency_condition        SG-CFS-MP-9 = up
dependency_location         same_node
dependency_name             MP10
dependency_condition        SG-CFS-MP-10 = up
dependency_location         same_node
dependency_name             MP11
dependency_condition        SG-CFS-MP-11 = up
dependency_location         same_node
dependency_name             MP12
dependency_condition        SG-CFS-MP-12 = up
dependency_location         same_node
dependency_name             MP13
dependency_condition        SG-CFS-MP-13 = up
dependency_location         same_node

external_script             /cfs/appl/oracle/cluster/pkg_oracle.sh
local_lan_failover_allowed  yes
monitored_subnet            10.238.87.0
#cluster_interconnect_subnet
ip_subnet                   10.238.87.0
ip_address                  10.238.87.143
#pev_