



VCS (VERITAS Cluster Server) is high-availability software that keeps applications such as databases and network file shares running by failing them over between the servers of a cluster.
LLT

LLT (Low Latency Transport) provides fast, kernel-to-kernel communications and monitors network connections. The system administrator configures LLT by creating a configuration file (llttab) that describes the systems in the cluster and the private network links among them. LLT runs in layer 2 of the network stack.

GAB

GAB (Group Membership and Atomic Broadcast) provides the global message order required to maintain a synchronised state among the systems, and monitors disk communications such as those required by the VCS heartbeat utility. The system administrator configures the GAB driver by creating a configuration file (gabtab).
/etc/llthosts                       A database containing one entry per system, linking the LLT system ID with the host name. The file is identical on each server in the cluster.
/etc/llttab                         Contains information derived during installation; used by the lltconfig utility.
/etc/gabtab                         Contains the information needed to configure the GAB driver; used by the gabconfig utility.
/etc/VRTSvcs/conf/config/main.cf    The VCS configuration file. Defines the cluster and its systems.
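As an illustration of the first three files, a minimal two-node setup might look like the following sketch. The host names (sun1/sun2), node IDs, cluster ID, and qfe NIC device names are assumptions for the example, not values taken from this sheet:

```
# /etc/llthosts -- maps LLT system IDs to host names, identical on both nodes
0 sun1
1 sun2

# /etc/llttab -- local node name, cluster ID, and the private network links
set-node sun1
set-cluster 2
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -

# /etc/gabtab -- seed GAB once 2 nodes are present
/sbin/gabconfig -c -n2
```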
/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 16 -S 1123
/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 144 -S 1124
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 16 -p a -S 1123
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 144 -p h -S 1124
/sbin/gabconfig -c -n <number of nodes>

gabdiskconf
  -i   initialise the disk region
  -s   start block
  -S   signature

gabdiskhb (heartbeat disks)
  -a   add a GAB disk heartbeat resource
  -s   start block
  -p   port
  -S   signature

gabconfig
  -c   configure the driver for use
  -n   number of systems in the cluster
verify that links are active for LLT                  lltstat -n
verbose output of the lltstat command                 lltstat -nvv | more
open ports for LLT                                    lltstat -p
display the values of LLT configuration directives    lltstat -c
list information about each configured LLT link       lltstat -l
list all MAC addresses in the cluster                 lltconfig -a list
stop LLT                                              lltconfig -U
start LLT                                             lltconfig -c
verify that GAB is operating                          gabconfig -a
    Note: port a indicates that GAB is communicating; port h indicates that VCS is started
stop GAB                                              gabconfig -U
start GAB                                             gabconfig -c -n <number of nodes>
override the seed values in the gabtab file           gabconfig -c -x
list membership                                       gabconfig -a
unregister port f                                     /opt/VRTS/bin/fsclustadm cfsdeinit

Port   Function
a      GAB driver
b      I/O fencing (designed to guarantee data integrity)
d      ODM (Oracle Disk Manager)
f      CFS (Cluster File System)
h      VCS (VERITAS Cluster Server: high availability daemon)
o      VCSMM driver (kernel module needed for the Oracle and VCS interface)
q      QuickLog daemon
v      CVM (Cluster Volume Manager)
w      vxconfigd (module for CVM)
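The port letters in the table above can be read straight out of `gabconfig -a`. A minimal sketch of extracting them — the output text here is assumed for illustration (generation numbers and membership bitmaps vary per cluster):

```shell
#!/bin/sh
# Illustrative gabconfig -a output (assumed, not captured from a real cluster)
output='GAB Port Memberships
===============================================================
Port a gen a36e0003 membership 01
Port h gen fd570002 membership 01'

# Pull out the registered port letters; seeing both "a" and "h" means
# GAB is communicating and the VCS engine (had) is up on the member nodes.
ports=$(printf '%s\n' "$output" | awk '/^Port/ {print $2}' | xargs)
echo "active ports: $ports"
```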
High Availability Daemon    had
Companion Daemon            hashadow
Resource Agent daemon
log directory                         /var/VRTSvcs/log
primary log file (engine log file)    /var/VRTSvcs/log/engine_A.log
start the cluster                                     hastart [-stale|-force]
    "-stale" instructs the engine to treat the local config as stale
    "-force" instructs the engine to treat a stale config as a valid one
bring the cluster into running mode from a stale state using the configuration file from a particular server    hasys -force <server_name>
stop the cluster on the local server but leave the application(s) running; do not fail over the application(s)    hastop -local
stop the cluster on the local server and evacuate (fail over) the application(s) to another node within the cluster    hastop -local -evacuate
stop the cluster on all nodes but leave the application(s) running    hastop -all -force
1 = read only mode (use haconf -makerw to make the configuration writable)
check the configuration file                          hacf -verify /etc/VRTSvcs/conf/config
    Note: you can point to any directory as long as it has main.cf and types.cf
convert a main.cf file into cluster commands          hacf -cftocmd /etc/VRTSvcs/conf/config -dest /tmp
convert a command file into a main.cf file            hacf -cmdtocf /tmp -dest /etc/VRTSvcs/conf/config
add a service group
    haconf -makerw
    hagrp -add groupw
    hagrp -modify groupw SystemList sun1 1 sun2 2
    hagrp -autoenable groupw -sys sun
    haconf -dump -makero

delete a service group
    haconf -makerw
    hagrp -delete groupw
    haconf -dump -makero

change a service group
    haconf -makerw
    hagrp -modify groupw SystemList sun1 1 sun2 2 sun3 3
    haconf -dump -makero
    Note: use the "hagrp -display
delete a system from a service group's SystemList     hagrp -modify grp_zlnrssd SystemList -delete
add a system to a service group's SystemList          hagrp -modify grp_zlnrssd SystemList -add
modify a service group's AutoStartList                hagrp -modify grp_zlnrssd AutoStartList
start a service group and bring its resources online  hagrp -online
(offlining) note: use the following to check "hagrp -display
add a resource
    haconf -makerw
    hares -add appDG DiskGroup groupw
    hares -modify appDG Enabled 1
    hares -modify appDG DiskGroup appdg
    hares -modify appDG StartVolumes 0
    haconf -dump -makero

delete a resource
    haconf -makerw
    hares -delete
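The hagrp/hares additions above end up as entries in main.cf. A sketch of what the resulting fragment might look like — the cluster name and system names here are illustrative assumptions, not values prescribed by this sheet:

```
include "types.cf"

cluster demo_clus ( )

system sun1 ( )
system sun2 ( )

group groupw (
    SystemList = { sun1 = 1, sun2 = 2 }
    )

    DiskGroup appDG (
        DiskGroup = appdg
        StartVolumes = 0
        )
```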
online a resource                                     hares -online