Veritas Cluster Cheat sheet

VCS uses two components, LLT and GAB, to share data over the private networks among systems.
These components provide the performance and reliability required by VCS.

LLT
LLT (Low Latency Transport) provides fast, kernel-to-kernel communications and monitors network
connections. The system administrator configures LLT by creating a configuration file (llttab)
that describes the systems in the cluster and the private network links among them. LLT runs
at layer 2 of the network stack.

GAB
GAB (Group Membership and Atomic Broadcast) provides the global message order required to
maintain a synchronised state among the systems, and monitors disk communications such as those
required by the VCS heartbeat utility. The system administrator configures the GAB driver by
creating a configuration file (gabtab).
LLT and GAB files
/etc/llthosts The file is a database, containing one entry per system, that links the LLT
system ID with the hosts name. The file is identical on each server in the
cluster.
/etc/llttab The file contains information that is derived during installation and is
used by the utility lltconfig.
/etc/gabtab The file contains the information needed to configure the GAB driver. This
file is used by the gabconfig utility.
/etc/VRTSvcs/conf/config/main.cf The VCS configuration file. The file contains the information that defines
the cluster and its systems.
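
The three plain-text files above are easiest to remember by example. On a small two-node Solaris cluster they might look roughly like the following sketch; the host names (sun1, sun2), node IDs, cluster number and NIC names (qfe0, qfe1) are placeholders assumed for illustration, not values taken from this sheet:

/etc/llthosts (identical on every node):
    0 sun1
    1 sun2

/etc/llttab (node identity, cluster number and the two private links):
    set-node sun1
    set-cluster 1
    link qfe0 /dev/qfe:0 - ether - -
    link qfe1 /dev/qfe:1 - ether - -

/etc/gabtab (start GAB and wait for 2 nodes before seeding):
    /sbin/gabconfig -c -n2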

Gabtab Entries
/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 16 -S 1123
/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 144 -S 1124
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 16 -p a -S 1123
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 144 -p h -S 1124
/sbin/gabconfig -c -n2

gabdiskconf
    -i   Initialises the disk region
    -s   Start Block
    -S   Signature
gabdiskhb (heartbeat disks)
    -a   Add a gab disk heartbeat resource
    -s   Start Block
    -p   Port
    -S   Signature
gabconfig
    -c   Configure the driver for use
    -n   Number of systems in the cluster
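
Reading one of the heartbeat entries above against these flags, the line breaks down as follows; the device path, start block and signature are simply the values used in the example entries:

/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 16 -p a -S 1123
    -a   add a heartbeat resource on disk /dev/dsk/c1t2d0s2
    -s   the reserved region starts at block 16
    -p   register the heartbeat on port a (GAB membership)
    -S   stamp the region with signature 1123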

LLT and GAB Commands
Verifying that links are active for LLT              lltstat -n   (sample output after this table)
verbose output of the lltstat command                lltstat -nvv | more
open ports for LLT                                   lltstat -p
display the values of LLT configuration directives   lltstat -c
lists information about each configured LLT link     lltstat -l
List all MAC addresses in the cluster                lltconfig -a list
stop the LLT running                                 lltconfig -U
start the LLT                                        lltconfig -c
verify that GAB is operating                         gabconfig -a
    Note: port a indicates that GAB is communicating, port h indicates that VCS is started
stop GAB running                                     gabconfig -U
start the GAB                                        gabconfig -c -n <number of nodes>
override the seed values in the gabtab file          gabconfig -c -x
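
For orientation, the lltstat -n check at the top of this table reports each node and the number of working private links. On a healthy two-node cluster the output looks something like the sketch below; the exact columns vary by platform and VCS version, and the node names are the placeholder hosts used elsewhere in this sheet:

    LLT node information:
        Node           State    Links
      * 0 sun1         OPEN     2
        1 sun2         OPEN     2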

GAB Port Membership

List Membership                                      gabconfig -a   (sample output after this table)
Unregister port f                                    /opt/VRTS/bin/fsclustadm cfsdeinit

Port   Function
a      gab driver
b      I/O fencing (designed to guarantee data integrity)
d      ODM (Oracle Disk Manager)
f      CFS (Cluster File System)
h      VCS (VERITAS Cluster Server: high availability daemon)
o      VCSMM driver (kernel module needed for Oracle and VCS interface)
q      QuickLog daemon
v      CVM (Cluster Volume Manager)
w      vxconfigd (module for cvm)
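
As an illustration of how these port letters show up in practice, gabconfig -a on a node where both GAB and VCS are running prints something like the sketch below; the generation numbers and membership bitmaps are placeholders and will differ on a real cluster:

    GAB Port Memberships
    ===============================================================
    Port a gen   a36e0003 membership 01
    Port h gen   fd570002 membership 01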

Cluster daemons

High Availability Daemon                             had
Companion Daemon                                     hashadow
Resource Agent daemon                                <resource>Agent
Web Console cluster management daemon                CmdServer
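
A quick way to confirm these daemons are up on a node is a plain process check; this is only a sketch and the exact ps options depend on the platform:

    ps -ef | egrep 'had|hashadow|CmdServer'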

Cluster Log Files

Log Directory                                        /var/VRTSvcs/log
primary log file (engine log file)                   /var/VRTSvcs/log/engine_A.log
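
When troubleshooting, it is usually enough to follow the engine log while reproducing the problem, for example:

    tail -f /var/VRTSvcs/log/engine_A.log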

Starting and Stopping the cluster

"-stale" instructs the engine to treat the local config as stale "-force" instructs the engine to treat a stale config as a valid one hastart [-stale|-force] Bring the cluster into running mode from a stale state using the configuration file from a particular server hasys -force <server_name> stop the cluster on the local server but leave the application/s running, do not failover the application/s hastop -local stop cluster on local server but evacuate (failover) the application/s to another node within the cluster hastop -local -evacuate stop the cluster on all nodes but leave the application/s running hastop -all -force

Cluster Status

1 = read only mode
Check the configuration file (a minimal main.cf sketch follows this section)
    hacf -verify /etc/VRTSvcs/conf/config
    Note: you can point to any directory as long as it has main.cf and types.cf
convert a main.cf file into cluster commands
    hacf -cftocmd /etc/VRTSvcs/conf/config -dest /tmp
convert a command file into a main.cf file
    hacf -cmdtocf /tmp -dest /etc/VRTSvcs/conf/config
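
hacf -verify parses main.cf together with types.cf. As a hedged sketch, a minimal main.cf for the two-node cluster assumed in this sheet has roughly the shape below; the cluster name, admin user and system names are placeholders, and in a real file the UserNames value is an encrypted string generated by VCS:

    include "types.cf"
    cluster mycluster (
        UserNames = { admin = password }
        Administrators = { admin }
        )
    system sun1 (
        )
    system sun2 (
        )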

Service Groups

add a service group (a matching main.cf sketch follows this section)
    haconf -makerw
    hagrp -add groupw
    hagrp -modify groupw SystemList sun1 1 sun2 2
    hagrp -autoenable groupw -sys sun1
    haconf -dump -makero
delete a service group
    haconf -makerw
    hagrp -delete groupw
    haconf -dump -makero
change a service group
    haconf -makerw
    hagrp -modify groupw SystemList sun1 1 sun2 2 sun3 3
    haconf -dump -makero
    Note: use "hagrp -display <group>" to list attributes
list the service groups                                      hagrp -list
list the groups dependencies                                 hagrp -dep <group>
list the parameters of a group                               hagrp -display <group>
display a service group's resources                          hagrp -resources <group>
display the current state of the service group               hagrp -state <group>
clear a faulted non-persistent resource in a specific grp    hagrp -clear <group> [-sys <system>]
Change the system list in a cluster:

remove the host
    hagrp -modify grp_zlnrssd SystemList -delete <hostname>

add the new host (don't forget to state its position)
    hagrp -modify grp_zlnrssd SystemList -add <hostname> 1

update the autostart list
    hagrp -modify grp_zlnrssd AutoStartList <host1> <host2>
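
The hagrp -add / hagrp -modify sequence under "add a service group" above ends up in main.cf as a group definition roughly like the sketch below (attribute values taken from that example; the exact set of attributes VCS writes out depends on the version):

    group groupw (
        SystemList = { sun1 = 1, sun2 = 2 }
        AutoStartList = { sun1 }
        )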

Service Group Operations

Start a service group and bring its resources online         hagrp -online <group> -sys <sys>
Stop a service group and take its resources offline          hagrp -offline <group> -sys <sys>
Switch a service group from one system to another            hagrp -switch <group> -to <sys>
Enable all the resources in a group                          hagrp -enableresources <group>
Disable all the resources in a group                         hagrp -disableresources <group>
Freeze a service group (disable onlining and offlining)      hagrp -freeze <group> [-persistent]
    note: use "hagrp -display <group> | grep TFrozen" to check
Unfreeze a service group (enable onlining and offlining)     hagrp -unfreeze <group> [-persistent]
    note: use "hagrp -display <group> | grep TFrozen" to check
Enable a service group. Enabled groups can only be brought online
    haconf -makerw
    hagrp -enable <group> [-sys <system>]
    haconf -dump -makero
    Note: to check, run "hagrp -display <group> | grep Enabled"
Disable a service group. Stops it from being brought online
    haconf -makerw
    hagrp -disable <group> [-sys <system>]
    haconf -dump -makero
    Note: to check, run "hagrp -display <group> | grep Enabled"
Flush a service group and enable corrective action           hagrp -flush <group> -sys <system>
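
As a short example of a planned switchover built from the operations above (group and system names reused from the examples elsewhere in this sheet):

    hagrp -state groupw              # confirm which system the group is online on
    hagrp -switch groupw -to sun2    # take it offline on its current system and online on sun2
    hagrp -state groupw              # verify the group is now online on sun2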

Resources

add a resource (a matching main.cf sketch follows this section)
    haconf -makerw
    hares -add appDG DiskGroup groupw
    hares -modify appDG Enabled 1
    hares -modify appDG DiskGroup appdg
    hares -modify appDG StartVolumes 0
    haconf -dump -makero
delete a resource
    haconf -makerw
    hares -delete <resource>
    haconf -dump -makero
change a resource
    haconf -makerw
    hares -modify appDG Enabled 1
    haconf -dump -makero
    Note: to list parameters, use "hares -display <resource>"
change a resource attribute to be globally wide              hares -global <resource> <attribute> <value>
change a resource attribute to be locally wide               hares -local <resource> <attribute>
list the parameters of a resource                            hares -display <resource>
list the resources                                           hares -list
list the resource dependencies                               hares -dep
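
For reference, the appDG example above corresponds to a resource definition inside the group's main.cf block roughly like the sketch below; only the DiskGroup and StartVolumes attributes set explicitly in the example are shown:

    DiskGroup appDG (
        DiskGroup = appdg
        StartVolumes = 0
        )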

Resource Operations

Online a resource                                            hares -online <resource> [-sys <system>]
Offline a resource                                           hares -offline <resource> [-sys <system>]
display the state of a resource (offline, online, etc)       hares -state <resource>
display the parameters of a resource                         hares -display <resource>
Offline a resource and propagate the command to its children hares -offprop <resource> -sys <system>
Cause a resource agent to immediately monitor the resource   hares -probe <resource> -sys <system>
Clear a faulted resource (automatically initiates onlining)  hares -clear <resource> [-sys <system>]
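
A typical recovery sequence combining these operations, assuming a resource appDG has faulted on sun1 (names reused from the examples above):

    hares -state appDG               # shows which system reports the resource as faulted
    hares -clear appDG -sys sun1     # clear the fault flag on sun1
    hares -online appDG -sys sun1    # bring the resource back online there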

Resource Types