HP-UX Cluster [CFSCluster]

CFS Cluster - HP-UX

What you can build with this documentation: (see MyCluster.png)
Documentation from HP.com

Installation

Install the required packages

For a cluster we need at least two boxes, so we install HP-UX 11.31 DCOE on both nodes. After that we need to install all the required software for the CFS/SG cluster:
  1. Install the Patch PHSS_40152 (included in PHSS_40583)
  2. # swinstall -s /var/tmp/patches/PHSS_40583.depot \*
    
  3. Install the Serviceguard Storage Management Suite (Nov 2009 release)
  4. # swremove B3929FB
    # swinstall -s ... Base-VxFS-501 \*
    # swinstall -s ... Base-VxTools-501 \*
    # swinstall -s ... Base-VxVM-501 \*
    # swinstall -s ... EventMonitoring \*
    # swinstall -s ... FEATURE11i \*
    # swinstall -s ... OpenSSL \*
    # swinstall -s ... T8695DB \*
    # swinstall -s ... VxFS-SDK-501 \*
    # swinstall -s ... WBEMSvcs \*
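    
    • To verify the installation, you can list the installed bundles with swlist (a quick check; adjust the grep patterns to the bundle names of your suite revision):
    # swlist -l bundle | grep -i -e vx -e guard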
    

Create a cluster

  1. Check the network settings
  2. [both]# vi /etc/hosts
    172.16.19.182   hpvm10-vm1 hpvm10-vm1.work
    172.16.19.183   hpvm10-vm3 hpvm10-vm3.work
    
    [both]# vi /etc/nsswitch.conf
    passwd:       compat
    group:        compat
    hosts:        files dns
    ipnodes:      files dns
    networks:     files
    protocols:    files
    rpc:          files
    publickey:    files
    netgroup:     files
    automount:    files
    aliases:      files
    services:     files
    
    [vm1]# vi /etc/rc.config.d/netconf
    INTERFACE_NAME[0]=lan0
    IP_ADDRESS[0]=172.16.19.182
    SUBNET_MASK[0]=255.255.255.0
    BROADCAST_ADDRESS[0]=172.16.19.255
    INTERFACE_STATE[0]=""
    DHCP_ENABLE[0]=1
    INTERFACE_MODULES[0]=""
    
    INTERFACE_NAME[1]=lan1
    IP_ADDRESS[1]=10.5.5.1
    SUBNET_MASK[1]=255.255.255.252
    BROADCAST_ADDRESS[1]=10.5.5.3
    INTERFACE_STATE[1]=""
    DHCP_ENABLE[1]=0
    INTERFACE_MODULES[1]=""
    
    INTERFACE_NAME[2]=lan2
    IP_ADDRESS[2]=10.6.6.1
    SUBNET_MASK[2]=255.255.255.252
    BROADCAST_ADDRESS[2]=10.6.6.3
    INTERFACE_STATE[2]=""
    DHCP_ENABLE[2]=0
    INTERFACE_MODULES[2]=""
    
    [vm3]# vi /etc/rc.config.d/netconf
    INTERFACE_NAME[0]=lan0
    IP_ADDRESS[0]=172.16.19.183
    SUBNET_MASK[0]=255.255.255.0
    BROADCAST_ADDRESS[0]=172.16.19.255
    INTERFACE_STATE[0]=""
    DHCP_ENABLE[0]=1
    INTERFACE_MODULES[0]=""
    
    INTERFACE_NAME[1]=lan1
    IP_ADDRESS[1]=10.5.5.2
    SUBNET_MASK[1]=255.255.255.252
    BROADCAST_ADDRESS[1]=10.5.5.3
    INTERFACE_STATE[1]=""
    DHCP_ENABLE[1]=0
    INTERFACE_MODULES[1]=""
    
    INTERFACE_NAME[2]=lan2
    IP_ADDRESS[2]=10.6.6.2
    SUBNET_MASK[2]=255.255.255.252
    BROADCAST_ADDRESS[2]=10.6.6.3
    INTERFACE_STATE[2]=""
    DHCP_ENABLE[2]=0
    INTERFACE_MODULES[2]=""
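    
    • To activate the new addresses without a reboot, the interfaces can also be configured by hand (a sketch for vm1; run the matching commands with the vm3 addresses on the other node):
    [vm1]# ifconfig lan1 inet 10.5.5.1 netmask 255.255.255.252 up
    [vm1]# ifconfig lan2 inet 10.6.6.1 netmask 255.255.255.252 up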
    
  3. Create the /etc/cmcluster/cmclnodelist file
  4. [both]# vi /etc/cmcluster/cmclnodelist
    hpvm10-vm1 root
    hpvm10-vm3 root
    
  5. Check the configuration
  6. [vm1]# cmquerycl -v -C /etc/cmcluster/cmclconf.ascii -n hpvm10-vm1 -n hpvm10-vm3 
    
  7. To tell Serviceguard to use only IPv4 addresses for the heartbeat, use the -h option
  8. [vm1]# cmquerycl -v -h ipv4 -C /etc/cmcluster/cmclconf.ascii -n hpvm10-vm1 -n hpvm10-vm3 
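    
    • Alternatively you can skip the GUI of the next step and create the cluster from the CLI: edit the generated cmclconf.ascii (lock disk, heartbeat networks), validate it, and apply it; the second cookbook below walks through this path:
    [vm1]# cmcheckconf -v -C /etc/cmcluster/cmclconf.ascii
    [vm1]# cmapplyconf -v -C /etc/cmcluster/cmclconf.ascii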
    
  9. Create a cluster (use Serviceguard Manager from the SMH → http://vm1:2301)
  10. (screenshot: SG Manager.png)
    • Don't forget to add a lock disk
    • Don't forget to add two Heartbeat networks
  11. Make sure the cluster is running (check it with cmviewcl, start it with cmruncl if necessary):
  12. [vm1]# cmviewcl
    [vm1]# cmruncl
    
  13. Add the Veritas commands to the path
  14. # cd
    # vi .profile
    ...
    PATH=...:/opt/VRTS/bin
    ...
    # . .profile
    
  15. Initialize the Veritas Volume Manager
  16. [both]# vxinstall
    
    • This displays a menu-driven program that steps you through the VxVM/CVM initialization sequence.
  17. Initialize the disks for CVM
  18. [both]# vxdisksetup -i disk14
    
    • You need to initialize the physical disks that will be used in CVM disk groups. If a physical disk has been previously used with LVM, use the pvremove command to delete the LVM header data from all the disks in the volume group, or overwrite the header with dd:
    # dd if=/dev/zero of=/dev/rdisk/disk14 bs=1024 count=10000
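    
    • A pvremove sketch for the same disk (repeat it for every disk that still carries an LVM header):
    [both]# pvremove /dev/rdisk/disk14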
    
  19. Activate the SG-CFS-pkg package and start up CVM with the cfscluster command; this creates SG-CFS-pkg and (with the -s option) also starts it
  20. [vm1]# cfscluster config -t 900 -s
    
  21. Alternatively, start the cluster separately from configuring it
  22. [vm1]# cfscluster start
    
  23. Verify the system multi-node package is running and CVM is up, using the cmviewcl or cfscluster command.
    [vm1]# cfscluster status
    Node            : hpvm10-vm1
      Cluster Manager : up
      CVM state       : up
      MOUNT POINT  TYPE     SHARED VOLUME  DISK GROUP  STATUS   
                                                                
      Node            : hpvm10-vm3
      Cluster Manager : up
      CVM state       : up (MASTER)
      MOUNT POINT  TYPE     SHARED VOLUME  DISK GROUP  STATUS   
    
  24. Find the master node using vxdctl or cfscluster status
  25. [vm3]# cfscluster status
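    
    • vxdctl reports the master as well (the output shown is indicative):
    [vm3]# vxdctl -c mode
    mode: enabled: cluster active - MASTER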
    
  26. Initialize a new disk group, or import an existing disk group, in shared mode, using the vxdg command
    • For a new disk group use the init option:
    [vm3]# vxdg -s init cfsdg disk14
    
    • For an existing disk group, use the import option:
    [vm3]# vxdg -C -s import cfsdg
    
  27. Verify the disk group. The state should be enabled and shared:
  28. [vm3]# vxdg list
    NAME         STATE           ID
    cfsdg        enabled,shared,cds   1268306669.30.hpvm10-v
    
  29. Use the cfsdgadm command to create the package SG-CFS-DG-ID#, where ID# is an automatically incremented number, assigned by Serviceguard
  30. [vm3]# cfsdgadm add cfsdg all=sw
    
  31. You can verify the package creation with the cmviewcl command, or with the cfsdgadm display command.
  32. [vm3]# cfsdgadm display
      Node Name : hpvm10-vm1 
      DISK GROUP            ACTIVATION MODE
      cfsdg         off  (sw)
    
      Node Name : hpvm10-vm3 (MASTER)
      DISK GROUP            ACTIVATION MODE
      cfsdg         off  (sw)
    
  33. Activate the disk group and start up the package
  34. [vm3]# cfsdgadm activate cfsdg
    
  35. To verify, you can use cfsdgadm or cmviewcl.
  36. [vm3]# cfsdgadm display -v cfsdg
      NODE NAME    ACTIVATION MODE             
      hpvm10-vm1   sw (sw)                     
      MOUNT POINT  SHARED VOLUME    TYPE       
      /cfs         cfsvol           regular    
                                               
      hpvm10-vm3   sw (sw)                     
      MOUNT POINT  SHARED VOLUME    TYPE       
      /cfs         cfsvol           regular
    
  37. To view the name of the package that is monitoring a disk group, use the cfsdgadm show_package command
  38. [vm3]# cfsdgadm show_package cfsdg
    SG-CFS-DG-1
    
  39. Create the cfsvol volume in the cfsdg disk group
  40. [vm3]# vxassist -g cfsdg make cfsvol 9g
    
  41. To create a host-based mirror across enclosures, use the command
  42. [vm3]# vxassist -g cfsdg make cfsvol 9g layout=mirror mirror=enclr
    
  43. Use the vxprint command to verify
  44. [vm3]# vxprint cfsvol
    Disk group: cfsdg
    
    TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
    v  cfsvol       fsgen        ENABLED  9437184  -        ACTIVE   -       -
    pl cfsvol-01    cfsvol       ENABLED  9437184  -        ACTIVE   -       -
    sd disk14-01    cfsvol-01    ENABLED  9437184  0        -        -       -
    
  45. Create a filesystem
  46. [vm3]# newfs -F vxfs /dev/vx/rdsk/cfsdg/cfsvol
    
  47. Create the cluster mount point
  48. [vm3]# cfsmntadm add cfsdg cfsvol /cfs all=rw
    
    • Package name “SG-CFS-MP-1” is generated to control the resource.
    • You do not need to create the directory. The command creates one on each of the nodes, during the mount.
  49. Verify with cmviewcl or cfsmntadm display.
    [vm3]# cfsmntadm display
      Cluster Configuration for Node: hpvm10-vm1
      MOUNT POINT  TYPE     SHARED VOLUME  DISK GROUP  STATUS   
      /cfs         regular  cfsvol         cfsdg       MOUNTED  
                                                                
      Cluster Configuration for Node: hpvm10-vm3
      MOUNT POINT  TYPE     SHARED VOLUME  DISK GROUP  STATUS   
      /cfs         regular  cfsvol         cfsdg       MOUNTED
    
  50. Mount the filesystem
  51. [vm3]# cfsmount /cfs
    
    • This starts up the multi-node package and mounts a cluster-wide filesystem.
  52. Verify that the multi-node package is running and the filesystem is mounted
  53. [vm3]# cmviewcl
    CLUSTER        STATUS       
    MyCluster      up           
      
      NODE           STATUS       STATE        
      hpvm10-vm1     up           running      
      hpvm10-vm3     up           running      
      
    MULTI_NODE_PACKAGES
    
      PACKAGE        STATUS           STATE            AUTO_RUN    SYSTEM      
      SG-CFS-pkg     up               running          enabled     yes         
      SG-CFS-DG-1    up               running          enabled     no          
      SG-CFS-MP-1    up               running          enabled     no
    
  54. To view the package name that is monitoring a mount point, use the cfsmntadm show_package command
  55. [vm3]# cfsmntadm show_package /cfs
    SG-CFS-MP-1
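    
  56. To unmount the cluster filesystem on all nodes later, use cfsumount, the counterpart of cfsmount
  57. [vm3]# cfsumount /cfs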
    

View the configuration

  • View disk group information
  • # vxprint
    
  • List information about a disk group
  • # vxlist -g dgxx
    
  • View created mirrors
  • # vxlist -al volxx
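    
  • View all disks together with the disk groups they belong to (a standard VxVM check)
  • # vxdisk -o alldgs list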
    

Another Cookbook

  1. Create the SG Cluster
    • Create and adjust the file /etc/cmcluster/cmclnodelist
    # vi /etc/cmcluster/cmclnodelist
    
    • Adjust the file /etc/rc.config.d/cmcluster
    # vi /etc/rc.config.d/cmcluster
    
    • Check that the auth service is enabled in the file /etc/inetd.conf and make inetd reread its configuration
    # vi /etc/inetd.conf
    # inetd -c
    
    • Check the network connections
    • Check the connection to the quorum server or the quorum disk
    • Prepare the cluster config file
    # cmquerycl -v -n <node1> -n <node2> -C /etc/cmcluster/cmclconf.ascii
    
    • Add the quorum-server to the cluster config file (port of the quorum-server: 1238)
  2. Edit the file cmclconf.ascii
  3. # vi /etc/cmcluster/cmclconf.ascii
    
  4. Apply the configuration
  5. # cmapplyconf -v -C /etc/cmcluster/cmclconf.ascii
    
  6. Start the cluster
  7. # cmruncl -v
    
  8. Start the CFS-cluster
  9. # cfscluster config
    # cfscluster start
    
    or
    # cfscluster config -s
    
    → A package named SG-CFS-pkg must run as a system multi-node package
  10. Check which node is the CFS master node, using # vxdctl -c mode or # cfscluster status
  11. Create a table of the SAN disks
  12. Prepare the disks for use under VxVM
  13. # init_disks.sh
    

    On the CFS master node of the cluster:
  14. Create the diskgroups
  15. # create_dgs.sh
    
  16. Create the diskgroup-packages
  17. # cfsdgadm.sh
    
  18. Activate the diskgroups
  19. # cfsdgactivate.sh
    
  20. Create the volumes
  21. # create_vols.sh
    
  22. Create the filesystems on the volumes
  23. # create_fs.sh
    
  24. Create the mountpoint-packages
  25. # create_mp.sh
    
  26. Mount the cluster filesystems
  27. # cfsmount.sh
    
  28. Adjust the permissions
  29. # create_perms.sh
    
  30. Create links where needed
  31. # create_links.sh
    
  32. Create the application-packages
  33. # create_package.sh
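    
  34. Verify that all multi-node and application packages are up and running (compare with the cmviewcl output in the first cookbook)
  35. # cmviewcl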
    

cmclconf.ascii

CLUSTER_NAME  oracle_cfs_cluster

HOSTNAME_ADDRESS_FAMILY  IPV4
# CLUSTER_LOCK_LUN /dev/disk/disk4_p2

# QS_HOST qs_host
# QS_ADDR qs_addr
# QS_POLLING_INTERVAL 120000000
# QS_TIMEOUT_EXTENSION 2000000

#FIRST_CLUSTER_LOCK_VG  

NODE_NAME  node1
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 192.168.1.141
  NETWORK_INTERFACE lan6
    HEARTBEAT_IP 192.168.2.141
  NETWORK_INTERFACE lan900
    STATIONARY_IP 10.238.87.141
  CLUSTER_LOCK_LUN /dev/disk/disk148

NODE_NAME  node2
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 192.168.1.41
  NETWORK_INTERFACE lan6
    HEARTBEAT_IP 192.168.2.41
  NETWORK_INTERFACE lan900
    STATIONARY_IP 10.238.87.41
  CLUSTER_LOCK_LUN /dev/disk/disk146


MEMBER_TIMEOUT   14000000

AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000

# CONFIGURED_IO_TIMEOUT_EXTENSION  0

NETWORK_FAILURE_DETECTION  INOUT

NETWORK_AUTO_FAILBACK  YES

SUBNET 10.238.87.0
  IP_MONITOR OFF

MAX_CONFIGURED_PACKAGES  300

# WEIGHT_NAME
# WEIGHT_DEFAULT

USER_NAME ANY_USER
USER_HOST ANY_SERVICEGUARD_NODE
USER_ROLE MONITOR

# VOLUME_GROUP  /dev/vgdatabase
# VOLUME_GROUP  /dev/vg02

init_disks.sh

vxdisksetup -i disk60
vxdisksetup -i disk25
vxdisksetup -i disk65
vxdisksetup -i disk27
vxdisksetup -i disk66
vxdisksetup -i disk28
vxdisksetup -i disk67
vxdisksetup -i disk68
vxdisksetup -i disk69
vxdisksetup -i disk70
...

create_dgs.sh
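# Create shared (-s) disk groups and add a disk from the second enclosure;
# the _loc1/_loc2 media names are a naming convention (assumed here) that
# records which enclosure a disk lives in, matching mirror=enclr in create_vols.sh.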

vxdg -s init dglog disk60_loc1=disk60
vxdg -g dglog adddisk disk25_loc2=disk25

vxdg -s init dgcfs disk65_loc1=disk65 
vxdg -g dgcfs adddisk disk27_loc2=disk27

vxdg -s init dgoracle disk66_loc1=disk66
vxdg -g dgoracle adddisk disk28_loc2=disk28

vxdg -s init dgbackup disk67_loc1=disk67
vxdg -g dgbackup adddisk disk68_loc1=disk68
vxdg -g dgbackup adddisk disk69_loc1=disk69
vxdg -g dgbackup adddisk disk70_loc1=disk70
vxdg -g dgbackup adddisk disk71_loc1=disk71
vxdg -g dgbackup adddisk disk72_loc1=disk72
vxdg -g dgbackup adddisk disk29_loc2=disk29
...

cfsdgadm.sh

cfsdgadm add dglog all=sw
cfsdgadm add dgcfs all=sw
cfsdgadm add dgoracle all=sw
cfsdgadm add dgbackup all=sw
cfsdgadm add dgoradata all=sw
cfsdgadm add dgoradata0 all=sw
cfsdgadm add dgoradata1 all=sw
cfsdgadm add dgoradata2 all=sw
cfsdgadm add dgoradata3 all=sw
cfsdgadm add dgoradata4 all=sw

cfsdgactivate.sh

cfsdgadm activate dglog
cfsdgadm activate dgcfs
cfsdgadm activate dgoracle
cfsdgadm activate dgbackup
cfsdgadm activate dgoradata
cfsdgadm activate dgoradata0
cfsdgadm activate dgoradata1
cfsdgadm activate dgoradata2
cfsdgadm activate dgoradata3
cfsdgadm activate dgoradata4

create_vols.sh

vxassist -g dglog make vollog 56g layout=mirror,nolog mirror=enclr
vxassist -g dglog make volscratch 4g layout=mirror,nolog mirror=enclr
vxassist -g dgcfs make volprod 4g layout=mirror,nolog mirror=enclr
vxassist -g dgcfs make volhome 4g layout=mirror,nolog mirror=enclr
vxassist -g dgcfs make voldata 4g layout=mirror,nolog mirror=enclr
vxassist -g dgoracle make volora 20g layout=mirror,nolog mirror=enclr
vxassist -g dgbackup make volbackup 600g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata make voloradata 400g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata0 make voloradata0 180g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata1 make voloradata1 140g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata2 make voloradata2 200g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata3 make voloradata3 240g layout=mirror,nolog mirror=enclr
vxassist -g dgoradata4 make voloradata4 60g layout=mirror,nolog mirror=enclr

create_fs.sh

newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgbackup/volbackup
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgcfs/voldata
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgcfs/volhome
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgcfs/volprod
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dglog/vollog
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dglog/volscratch
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoracle/volora
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata/voloradata
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata0/voloradata0
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata1/voloradata1
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata2/voloradata2
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata3/voloradata3
newfs -F vxfs -o largefiles -b 8192 /dev/vx/rdsk/dgoradata4/voloradata4

create_mp.sh

cfsmntadm add dgbackup volbackup /cfs/appl/backup all=rw
cfsmntadm add dgcfs voldata /cfs/data/data2 all=rw
cfsmntadm add dgcfs volhome /cfs/apphome all=rw
cfsmntadm add dgcfs volprod /cfs/prod all=rw
cfsmntadm add dglog vollog /cfs/log all=rw
cfsmntadm add dglog volscratch /cfs/scratch all=rw
cfsmntadm add dgoracle volora /cfs/appl/oracle all=rw
cfsmntadm add dgoradata voloradata /cfs/oradata/expimp all=rw
cfsmntadm add dgoradata0 voloradata0 /cfs/oradata/part00 all=rw
cfsmntadm add dgoradata1 voloradata1 /cfs/oradata/part01 all=rw
cfsmntadm add dgoradata2 voloradata2 /cfs/oradata/part02 all=rw
cfsmntadm add dgoradata3 voloradata3 /cfs/oradata/part03 all=rw
cfsmntadm add dgoradata4 voloradata4 /cfs/oradata/part04 all=rw

cfsmount.sh

cfsmount /cfs/appl/backup
cfsmount /cfs/data/data2
cfsmount /cfs/apphome
cfsmount /cfs/prod
cfsmount /cfs/log
cfsmount /cfs/scratch
cfsmount /cfs/appl/oracle
cfsmount /cfs/oradata/expimp
cfsmount /cfs/oradata/part00
cfsmount /cfs/oradata/part01
cfsmount /cfs/oradata/part02
cfsmount /cfs/oradata/part03
cfsmount /cfs/oradata/part04

create_perms.sh

chown root:root /cfs
chmod -R 755 /cfs

chown oracle:oinstall /cfs/appl/backup

chown root:root /cfs/data/data2

chown root:root /cfs/apphome
chown root:root /cfs/log
chown oracle:oinstall /cfs/appl/oracle
chown oracle:oinstall /cfs/oradata/expimp
chown oracle:oinstall /cfs/oradata/part00
chown oracle:oinstall /cfs/oradata/part01
chown oracle:oinstall /cfs/oradata/part02
chown oracle:oinstall /cfs/oradata/part03
chown oracle:oinstall /cfs/oradata/part04

chown root:root /cfs/prod
chown root:root /cfs/scratch

create_links.sh

ln -sf /cfs/log /log

ln -sf /cfs/prod /prod
ln -sf /cfs/apphome /apphome 

ln -sf /cfs/data/data2 /data/data2
ln -sf /cfs/scratch /scratch

Create an application package

  1. Create a package with the given modules
  2. # cmmakepkg -m sg/basic -m sg/dependency -m sg/external -m sg/failover -m sg/monitor_subnet \
      -m sg/package_ip -m sg/pev /etc/cmcluster/oracle/oracle.config
    
  3. Adjust the config file
    # vi /etc/cmcluster/oracle/oracle.config
  4. Create a start/stop-script
  5. # cp /etc/cmcluster/examples/*.template /cfs/appl/oracle/cluster/pkg_oracle.sh
    
  6. Adjust the functions in pkg_oracle.sh:
    • validate_command is executed during the cmapplyconf command.
    • start_command is executed during package startup.
    • stop_command is executed when the package stops.
  7. To enable package environment variables
    • In the package config file use
    pev_<name>
    
    • In the start/stop-script use
    $PEV_<NAME>
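    
    • For example, with a hypothetical variable name: pev_dbname ORCL in the config file becomes $PEV_DBNAME in the script:
    echo "starting database $PEV_DBNAME"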
    
  8. Activate the package
  9. # cmapplyconf -P /etc/cmcluster/oracle/oracle.config
    
  10. Start the package
  11. # cmmodpkg -e oracle 
    
    or
    # cmrunpkg -v -n <node> oracle
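    
    • To verify, check that the oracle package is shown as up and running on one of the nodes:
    # cmviewcl -v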
    

oracle.config

package_name   oracle
package_description  "Serviceguard Package for ORACLE"

module_name   sg/basic
module_version   1
module_name   sg/priority
module_version   1
module_name   sg/dependency
module_version   1
module_name   sg/external
module_version   1
module_name   sg/failover
module_version   1
module_name   sg/monitor_subnet
module_version   1
module_name   sg/package_ip
module_version   1
module_name   sg/pev
module_version   1

package_type   failover

node_name   node1
node_name   node2

auto_run   yes
node_fail_fast_enabled  no

run_script_timeout  600
halt_script_timeout  600

successor_halt_timeout  no_timeout

script_log_file   $SGCONF/oracle/oracle.log

operation_sequence  $SGCONF/scripts/sg/package_ip.sh
operation_sequence  $SGCONF/scripts/sg/external.sh

#log_level  

failover_policy   configured_node
failback_policy   manual

priority   no_priority

dependency_name   MP7
dependency_condition  SG-CFS-MP-7 = up 
dependency_location  same_node  

dependency_name   MP8
dependency_condition  SG-CFS-MP-8 = up 
dependency_location  same_node  

dependency_name   MP9
dependency_condition  SG-CFS-MP-9 = up 
dependency_location  same_node  

dependency_name   MP10
dependency_condition  SG-CFS-MP-10 = up 
dependency_location  same_node  

dependency_name   MP11
dependency_condition  SG-CFS-MP-11 = up 
dependency_location  same_node  

dependency_name   MP12
dependency_condition  SG-CFS-MP-12 = up 
dependency_location  same_node  

dependency_name   MP13
dependency_condition  SG-CFS-MP-13 = up 
dependency_location  same_node  

external_script   /cfs/appl/oracle/cluster/pkg_oracle.sh

local_lan_failover_allowed yes

monitored_subnet 10.238.87.0  

#cluster_interconnect_subnet   

ip_subnet  10.238.87.0 
ip_address  10.238.87.143 

#pev_