
Failed root disk using veritas volume manager








# action $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a n v11 -sysinit On both hosts offcourse in the /etc/rc.d/rc.sysinit ~]# vi /etc/rc.d/rc.sysinitĪction $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a ay -sysinit

  • Then I make sure that all LVM volumes (specially the root disks on volume group v1) are activated at boot.
  • Starting LVM metadata daemon: # chkconfig lvm2-lvmetad ~]# service lvm2-lvmetad status Use_lvmetad = ~]# service lvm2-lvmetad start Use_lvmetad = ~]# grep "use_lvmetad = " /etc/lvm/lvm.conf # before changing use_lvmetad to 1 and started again afterwards. # If lvmetad has been running while use_lvmetad was 0, it MUST be stopped
  • So first I enabled lvmetad on both hosts in the /etc/lvm/lvm.conf file and enable the service vi /etc/lvm/lvm.conf.
  • I found out that auto_activation_volule_list in /etc/lvm/lvm.conf depends on the service lvmetad. # auto_activation_volume_list = Īuto_activation_volume_list = # matches if any tag defined on the host is also set in the LV or VG # "vgname" and "vgname/lvname" are matched exactly. # option (-activate ay/-a ay), and if it matches, it is activated. # activated is checked against the list while using the autoactivation # If auto_activation_volume_list is defined, each LV that is to be I also tried editing /etc/lvm/lvm.conf file to only auto activate the LVM volumes used for the OS, thus not auto activating the LVM volumes used in the Service Groups: # action $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a ay -sysinitĪction $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a y v1 -sysinitĪction $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a n v11 -sysinitĪction $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a n v2 -sysinitĪction $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a n v3 -sysinitĪction $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a n v4 -sysinitĪction $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a n v5 -sysinitĪction $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a n v6 -sysinitĪction $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a n v8 -sysinitĪction $"Setting up Logical Volume Management:" /sbin/lvm vgexport -a -sysinit I tried editing the /etc/rc.d/rc.sysinit file to deactivate my Service Group LVM volumes and an export of all LVM volumegroups with inactive volumes. These are the options that I already tried, without success: Or deactivate it and export the volume group before starting VCS.ĭoes Symantec has more information, procedure.
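
A minimal sketch of how those two lvm.conf settings could be combined, assuming the root volume group really is called v1 as in my rc.sysinit edits; this is an illustration, not a tested configuration:

    # /etc/lvm/lvm.conf (excerpt, sketch only)
    global {
        # lvmetad enabled, since auto_activation_volume_list appeared to depend on it
        use_lvmetad = 1
    }
    activation {
        # auto-activate only the OS volume group at boot;
        # the Service Group volume groups are left to VCS
        auto_activation_volume_list = [ "v1" ]
    }

With this in place the stock "vgchange -a ay --sysinit" line in rc.sysinit should only activate v1 at boot.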

    #Failed root disk using veritas volume manager Offline

    This operational issue is known to Symantec and described in various recent release notes. See page 38 in "Veritas Cluster Server 6.0.1 Release Notes - Linux", page 30 in "Veritas Cluster Server 6.0.4 Release Notes - Linux", and page 47 in "Veritas Cluster Server 6.1 Release Notes - Linux": the SG goes into Partial state if a native LVM VG is imported and activated outside VCS control. If you import and activate an LVM volume group before starting VCS, the LVMVolumeGroup resource remains offline though the LVMLogicalVolume resource comes online. This causes the service group to be in a partial state. Workaround: You must bring the VCS LVMVolumeGroup resource offline manually, or deactivate it and export the volume group before starting VCS.

    The workaround is rather vague and I already tried several options. Does Symantec have more information or a procedure?
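
    To make that workaround concrete, it roughly comes down to the commands below; the resource name sg1_lvmvg, the volume group v2 and the node name node1 are placeholders for the real names in my cluster:

        # option 1: bring the LVMVolumeGroup resource offline by hand after boot
        ~]# hares -offline sg1_lvmvg -sys node1

        # option 2: deactivate and export the volume group before starting VCS
        ~]# vgchange -a n v2
        ~]# vgexport v2
        ~]# hastart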

    #Failed root disk using veritas volume manager manual

    When I reboot both nodes and the cluster starts, my Service Groups are in Partial state, even though the AutoStart attribute is set to false for both SG's. Every resource in my SG's is offline, except for my LVM Volumes. VCS is not trying to start any resource, which is correct since AutoStart is disabled. But the cluster sees all LVM Volumes as online, not the LVM Volume Groups. The funny thing is that all LVM volumes (for both SG's) are online on the first node, even when the LastOnline was the second node, and even when the priority for the second SG is set to the second node.

    It appears that LVM activates all volumes at boot time, just before the cluster kicks in. This was proven by stopping the cluster on all nodes, deactivating all LVM volumes manually via the vgchange command, and starting the cluster again (so no host reboot). That results in the expected behavior: no resource online and SG's offline.
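
    The proof, roughly as I ran it (volume group names as in my rc.sysinit edits above; the exact list differs per setup):

        ~]# hastop -all -force                     # stop the cluster engine on all nodes, resources are left alone
        ~]# vgchange -a n v11 v2 v3 v4 v5 v6 v8    # deactivate the Service Group LVM volumes by hand
        ~]# hastart                                # start the cluster again, no host reboot
        ~]# hastatus -sum                          # the SG's now show offline instead of Partial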

    failed root disk using veritas volume manager

    The cluster has one failover Service Group on the first node, and a second failover Service Group on the second node.
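
    In main.cf the two groups look roughly like this; sg1, sg2, node1 and node2 are placeholder names, and AutoStart = 0 matches the disabled AutoStart attribute mentioned above:

        group sg1 (
            SystemList = { node1 = 0, node2 = 1 }
            AutoStartList = { node1 }
            AutoStart = 0
            )

        group sg2 (
            SystemList = { node2 = 0, node1 = 1 }
            AutoStartList = { node2 }
            AutoStart = 0
            )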

    failed root disk using veritas volume manager

    I have a two-node Veritas Cluster on Red Hat. My volumes are under LVM (customer standard). Since Symantec does not support Device Mapper multipathing for the combination LVM+VCS on Red Hat, I use Veritas Dynamic Multi-Pathing. Now I finally managed to get it working properly, but I still have one issue. The environment is listed below, followed by a sketch of how the LVM resources are modelled in VCS.

  • Multipathing: Veritas Dynamic Multi Pathing.
  • OS: Red Hat Enterprise Linux Server release 6.4 (Santiago).
  • Cluster: Veritas Cluster Server 6.1.0.000.
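
A sketch of how the LVM resources in one of the Service Groups are modelled with the VCS LVMVolumeGroup and LVMLogicalVolume agents; the resource names sg1_vg and sg1_lv and the logical volume lvol1 are placeholders, v2 is one of my Service Group volume groups:

    LVMVolumeGroup sg1_vg (
        VolumeGroup = v2
        )

    LVMLogicalVolume sg1_lv (
        VolumeGroup = v2
        LogicalVolume = lvol1
        )

    sg1_lv requires sg1_vg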







