Sunday, 24 November 2013

VxVM interview questions

1)    Can you reduce a FS in VxVM ? What is the risk involved ?
Yes, a VxFS filesystem can be reduced online with vxresize, but there are a few risks and restrictions involved. The main risk is shrinking the underlying volume without shrinking the filesystem along with it (for example with vxassist alone), which can destroy data; vxresize resizes the filesystem and the volume together.
1.1)    UFS volumes cannot be shrunk; only the grow operation is permitted.
1.2)    VxFS volumes must be mounted for both grow and shrink operations; no resize operation can be performed on an unmounted VxFS volume.
1.3)    You cannot resize a volume that contains plexes with different layout types (e.g. concat and stripe). Attempting to do so results in the following error message:
              VxVM vxresize ERROR V-5-1-2536 Volume volume has different
               organization in each mirror
 

To resize such a volume successfully, you must first reconfigure it so that each data plex has the same layout type.
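
As an illustration, a mounted VxFS volume can be grown or shrunk online with vxresize (the diskgroup, volume and sizes below are only examples):

# vxresize -g datadg datavol -1g         (shrink the volume and its filesystem by 1 GB)
# vxresize -g datadg datavol +1g         (grow the volume and its filesystem by 1 GB)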

2)    What is the role of vxconfigd daemon ?
vxconfigd daemon handles all configuration management tasks for VxVM objects. It maintains disk and diskgroup configuration details, communicates configuration changes to the kernel, and modifies the persistent configuration information stored on disk.
 vxconfigd provides the interface between the VxVM utilities and the kernel device drivers: it receives configuration change requests from the utilities, passes them to the VxVM kernel, and updates the configuration copies kept on disk.
 vxconfigd also initializes VxVM when the system is booted.
 

So practically any command which changes the configuration of VxVM objects (plexes, volumes, etc.) interacts with the vxconfigd daemon.
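
As an illustration, the state of vxconfigd can be checked, and it can be asked to rescan the disks and rebuild its configuration, with the vxdctl command:

# vxdctl mode          (shows whether vxconfigd is enabled, disabled or not running)
# vxdctl enable        (requests vxconfigd to rescan the disks and rebuild its configuration)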

3)    Where is the diskgroup configuration information stored in VxVM ?
The live DG configuration is kept in the private region of the disks in the group; backup copies are stored in the /etc/vx/cbr/bk directory, where the last 5 configuration changes are kept.
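
A manual backup and restore of the DG configuration can also be done with the configuration backup utilities (the diskgroup name below is only an example):

# vxconfigbackup mydg          (back up the configuration of diskgroup mydg under /etc/vx/cbr/bk)
# vxconfigrestore -p mydg      (stage a restore of the backed-up configuration; -c commits it)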


4)    What is stored in privlen of a VxVM disk ? Do all the disks of a diskgroup have the same information in their private region ?
The private region of a VxVM disk stores the disk header label and configuration information about VxVM objects such as volumes, plexes and subdisks.
 

Yes, each disk that holds a configuration copy stores the entire configuration of the diskgroup, although VxVM may not maintain an active configuration copy on every disk in a large diskgroup.
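
As an illustration, the private region details of a disk (offsets, lengths and the number of configuration copies) can be viewed with vxdisk list (the device name below is only an example):

# vxdisk list c1t1d0s2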

5)    Can you import a DG with incomplete set of disks ?
Yes. If some of the disks in a diskgroup have failed, you can force the import of the DG using the -f option:
# vxdg -f import diskgroup


6) How will you replace a faulty disk in VxVM ?
A failed disk can be identified from the output of the vxdisk list command: the device (c#t#d#) column is empty for that disk and its state shows as 'failed was:c#t#d#'. This confirms that the disk has indeed failed.
We can use the vxdiskadm menu options to replace the faulted disk. Choose option 5 from the menu, then list the failed disks using the 'list' option. We then have to choose an alternate disk for the replacement; the list of available disks is also shown by vxdiskadm. Press 'y' and complete the replacement activity.
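
A rough command-line alternative to the vxdiskadm menu looks like this (the diskgroup, disk media name and device names below are only examples):

# vxdisk list | grep -i failed                (identify the failed disk)
# vxdg -g mydg -k rmdisk mydg01               (remove the failed disk, keeping its disk media record)
# vxdisksetup -i c2t3d0                       (initialize the replacement disk)
# vxdg -g mydg -k adddisk mydg01=c2t3d0       (attach the new device under the old disk media name)
# vxrecover -g mydg -sb                       (start volumes and resynchronize plexes in the background)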

Tuesday, 19 November 2013

Solaris Interview Questions

1) Can you reduce a FS in VxVM ? What is the risk involved ?
2) What are branded zones in solaris ?
3) What is the main enhancement in NFS v4 over v3 ?
4) How can we refresh automount config without affecting connected users ?
5) How many types of automount maps are there ?
6) How can you patch a solaris zone which is a sparse root ?

7) What is the difference between zone and container in Solaris ?
8) How can we assign zpool to NGZ ?
9) How thin provisioning / sparse volume is created in ZFS ?


10) Suppose after a power outage or a mistake by the storage admin, one of your LUNs in VxVM was unavailable and now it has become available again, but your mountpoint is throwing I/O errors. How will you resolve this situation ?
11) What are the advantages of using ZFS instead of UFS ?
12) Which is better hardware  RAID or software RAID ? why ?
13) After a power outage or kernel panic ZFS pool is not getting imported ? What are the steps for troubleshooting this ?
14) Which virtualization solution would you recommend to customer LDOM or zones ? under what scenario and considerations ?
15) Suppose you have a low memory condition on solaris server . How will you troubleshoot further ?
16) What is the change in regards to network configuration in Solaris 10 over Solaris 8 ?

17) How can you restore LDOM configuration ?
18) How can we clone LDOM ?
19) Which field of iostat indicates an I/O bottleneck ? What are the important metrics to be looked at in iostat output ?
20) In maintaining Solaris security what are the common steps / procedures followed by System Admins ?
21) In Solaris we have resource pools ( rpool ). We can assign rpools to zones as well to set cpu shares. Then why do we need resource controls in zones ( add rctl ) . What is the advantage of using resource controls when resource pooling is already there ?
22) How to install multiple patches in a single patchadd invocation ?   

Monday, 18 November 2013

VCS VXVM DiskGroup switchover example



Step 1:- Start the desired haagent ( DiskGroup) on both the machines.

client1: /tmp/vcs> haagent -start DiskGroup -sys client1
client1: /tmp/vcs> haagent -start DiskGroup -sys client2

Step 2:- Add the servicegroup and populate the SystemList.

client1: /tmp> haconf -makerw
client1: /tmp> hagrp -add DG_servicegroup
VCS NOTICE V-16-1-10136 Group added; populating SystemList and setting the Parallel attribute recommended before adding resources
client1: /tmp> hagrp -modify DG_servicegroup SystemList client1 1 client2 2

Step 3:- Add the resource, specify the type of the resource and assign it to a servicegroup, enable the resource, specify the Disk Group name etc and bring the resource online.

client1: /tmp> hares -add VXVMDG DiskGroup DG_servicegroup
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
client1: /tmp> hares -modify VXVMDG Enabled 1
client1: /tmp> hares -modify VXVMDG DiskGroup clusterdg
client1: /tmp> hares -modify VXVMDG StartVolumes 1
client1: /tmp> hares -online VXVMDG -sys client1

Step 4:- Add a ‘Mount’ resource type named clustervol ( or whichever name it is easy to identify with ), specify the correct blockdevice name, FS type, mountpoint and fsck option. Enable and bring the resource online.

client1: /tmp> hares -add clustervol Mount DG_servicegroup
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
client1: /tmp> hares -modify clustervol BlockDevice /dev/vx/dsk/clusterdg/clustervol
client1: /tmp> hares -modify clustervol FSType vxfs
client1: /tmp> hares -modify clustervol MountPoint /clustervol
client1: /tmp> hares -modify clustervol FsckOpt %-y
client1: /tmp> hares -modify clustervol Enabled 1
client1: /tmp> hares -online clustervol -sys client1

Step 5 :- Link the two resources, i.e. DiskGroup and Mount, and enable the resources contained within the servicegroup.

client1: /tmp> hares -link clustervol VXVMDG
client1: /tmp> hagrp -enableresources DG_servicegroup
client1: /tmp> hagrp -online DG_servicegroup -sys client1
client1: /tmp> haconf -dump -makero
client2: /tmp> mkdir /clustervol
client1: /tmp> hagrp -switch DG_servicegroup -to client2
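
Once the switch completes, the state of the servicegroup on both nodes can be verified, for example:

client1: /tmp> hagrp -state DG_servicegroup
client1: /tmp> hastatus -sum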

Friday, 15 November 2013

Extended process list in Solaris 10 and HPUX, without truncated lines



Some processes typically have a very long command line (more than 1000 characters) of arguments, and ps -ef | grep <pid> will show only one truncated line (max 80 characters). Both Solaris and HP-UX have extensions through which we can view the full command line, without truncation.


To retrieve the full command line of a process by running ps

on Solaris :-
/usr/ucb/ps -agxuwwwww PID

On HPUX :-
ps -exx
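
On Solaris 10, the pargs command can also be used to print the complete argument list of a process, for example:

pargs PID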

Tuesday, 12 November 2013

Installing HP-UX clients from Golden Image

      


A Golden Image is a compressed archive of an existing system. It contains all the software and the hardware configuration of that system, and it can be deployed to clients on the network that have a similar hardware configuration.

Steps to clone a client using Golden Image.

 1. Edit the .rhosts file on the Ignite-UX server as well as on the client, so that the archive can be stored directly on the server and the client can later be rebooted from the server for installation.

[ignite-server]# cat /.rhosts
10.237.93.112               root


2. Use the following directory for storing archives

[ignite-server]#pwd
/var/opt/ignite/archives


3. Add the following directories to the list of NFS-exported directories

[ignite-server]# vi /etc/exports

/var/opt/ignite/clients -anon=2
/var/opt/ignite/archives -anon=2
 
 
[ignite-server]#exportfs -av
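
The exported directories can then be verified with showmount, for example:

[ignite-server]# showmount -e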
 
 
 
4. From the client machine we run the make_sys_image script so that the archive is stored on the Ignite server at /var/opt/ignite/archives.
 
[client]# pwd
/opt/ignite/data/scripts
[client]#./make_sys_image -s 10.237.93.115 -d /var/opt/ignite/archives

    - where 10.237.93.115 is the IP of the Ignite server.
 
 
5. Once the archive creation is complete, from the Ignite-UX server we can edit our configuration file to reflect the changes we want to make.
We will copy the example file provided by HP to the directory where we will store our other configuration files.
 
[ignite-server]# cd /opt/ignite/data/examples
[ignite-server]# cp core11.cfg /var/opt/ignite/data/Rel_B.11.31/archive11.cfg
 
 
6. While creating a Golden Image we need to manually calculate the archive impact; the output helps in deciding the size of the mountpoints.
 
[ignite-server]#pwd
/var/opt/ignite/archives
[ignite-server]#ls
ggntest1.gz
[ignite-server]#/opt/ignite/lbin/archive_impact -t -g /var/opt/ignite/archives/ggntest1.gz > /tmp/GOLDEN.impacts
 
 
 
7. We can use the impact statements generated to edit our config files. The archive11.cfg file is critical, so edit it very carefully.
 
[ignite-server]# vi /var/opt/ignite/data/Rel_B.11.31/archive11.cfg
 
........
 
 things to note here in this file are
 
 nfs_source = "10.237.93.115:/var/opt/ignite/archives"
 archive_path = "ggntest1.gz"
 
  - the archive_path is relative to the nfs_source specified earlier in the file. The archive will be transferred via NFS to the client. Also check the permissions on the archive file (755), otherwise Ignite will throw a lot of errors.
 
post_load_script = "/opt/ignite/data/scripts/os_arch_post_l"
post_config_script = "/opt/ignite/data/scripts/os_arch_post_c"
 
These two scripts are also run to ensure that installation is complete.
 
All the impacts statements have to be filled in using the file generated previously (/tmp/GOLDEN.impacts).
 
impacts="/" 2048000Kb
impacts="tmp" 6144000Kb
.
.
 
 
 
8. Use the save_config command to create a configuration file for the disk and hardware configuration (e.g. hardware paths).
 
[client]# save_config -f /tmp/save_config.out vg00
 
Copy this over to the ignite server.
 
[client]#rcp /tmp/save_config.out 10.237.93.115:/var/opt/ignite/data/Rel_B.11.31/archive_disk.cfg
 
The file archive_disk.cfg contains hardware paths and the entire volume group related information. Edit this file to reflect all the required customizations. If the client being installed does not have a similar hardware configuration, the installation might fail.
 
 
 
9. In the INDEX file we need to create an entry for the Golden Image so that it shows up in the client config window and we can point to it while booting the client. We need to add the paths to our custom configuration files so that these files are read while booting clients.
 
 Add the following lines below the default entries.
 
[ignite-server]#vi /var/opt/ignite/INDEX
.
.
cfg "Golden Image" {
 
        description "HP-UX B.11.31 Golden Image"
 
        "/var/opt/ignite/data/Rel_B.11.31/archive11.cfg"
 
        "/var/opt/ignite/data/Rel_B.11.31/archive_disk.cfg"
 
        "/var/opt/ignite/config.local"
 
}
 
 
 
 
 
10. Check the files for syntax errors as follows
 
[ignite-server]# instl_adm -T
       * Checking file: /opt/ignite/data/Rel_B.11.11/config
       * Checking file: /opt/ignite/data/Rel_B.11.11/hw_patches_cfg
       * Checking file: /var/opt/ignite/config.local
       .
       * Checking file: /opt/ignite/data/Rel_B.11.31/config
       * Checking file: /opt/ignite/data/Rel_B.11.31/hw_patches_cfg
       * Checking file: /var/opt/ignite/data/Rel_B.11.31/archive11.cfg
       * Checking file: /var/opt/ignite/data/Rel_B.11.31/archive_disk.cfg
 
[ignite-server]#manage_index -l
HP-UX B.11.11 Default
HP-UX B.11.23 Default
HP-UX B.11.31 Default
Golden Image
 
As our Golden Image is showing up in the index, we can boot the client and point it to install from the configuration.
 
11. Boot the client from the ignite server by using the following command.
 
[ignite-server]#bootsys -i "Golden Image" -f client
 



Procedure for installing Ignite-UX clients over the network




1. For setting up the Ignite server, the Ignite-UX server should have the following entry
    in the /etc/exports file:

     /var/opt/ignite/clients -anon=2

2.  Check the config files with this command:

      instl_adm -T

    The config files should be world readable; if they are not, instl_adm -T will report the error.

     The config files required are as follows:

      /opt/ignite/data/Rel_B.11.11/config
      /opt/ignite/data/Rel_B.11.11/hw_patches_cfg
      /var/opt/ignite/config.local


 3.  The /etc/inetd.conf file should have the following settings for tftp and instl_boots.


      tftp        dgram  udp wait   root /usr/lbin/tftpd    tftpd\
        /opt/ignite\
        /var/opt/ignite

       ->  The tftp service should have access to /opt/ignite and /var/opt/ignite so that it can
              transfer files using tftp during installation.

    
      instl_boots dgram udp wait root /opt/ignite/lbin/instl_bootd instl_bootd


  4.    The /etc/opt/ignite/instl_boottab file should have the following entry

         corresponding to each host that we want to boot using Ignite-UX server.

           <IP-address>:<Mac-address(with leading 0x)>::


   The last field should be left blank, as it is automatically updated by the Ignite-UX server
   when it receives an installation request from the client corresponding to the MAC address
   mentioned in the file.
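
   For example, a hypothetical entry (the IP and MAC address below are only illustrative) would look like:

           10.237.93.112:0x00306E4A1B2C::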




Note -  The tftpd and instl_bootd daemons are started by the Ignite-UX server when it
              receives a request for installation. Do not try to start these daemons manually.


5.  After this, reboot the client and interrupt the boot process to stop it at the BCH> prompt.
    From the BCH> prompt type sea lan install to search for the Ignite server.
     The output will be similar to the following.



Main Menu: Enter command or menu > sea lan install

 Searching for potential boot device(s) - on Path 0/1/2/0
     This may take several minutes.

To discontinue search, press any key (termination may not be immediate).


                                                                     IODC
   Path#  Device Path (dec)  Device Path (mnem)   Device Type   Rev
   -----  -----------------  -------------------  ------------  ----
   P0     0/1/2/0            lan.10.237.93.115    LAN Module    4


  

   This means that the server is giving a valid offer for installation.


6.   Next type the following command to boot from the server.

     BCH> boot lan.10.237.93.115 install

   After this, the installation procedure is similar to the normal procedure for HP-UX
    installation. Carefully select the recovery archive from which the OS needs to be installed,
    if there is more than one image.



      

Procedure for creating Ignite-UX recovery archive over the network




1.  Determine the archive server and archive path where you want to store your Ignite recovery archive. This server can be different from the Ignite-UX server used for booting clients over the network, or it can be the same server.
                  The archive path should be NFS-exported before executing the make_net_recovery command, because it is NFS-mounted on the Ignite-UX client before archive creation. The archive is then transferred via tar to the NFS-mounted directory of the archive server.

 For example, if the archive path on archive server is /u01/my_archives/<Hostname> then the /etc/exports file on archive server should have the following entry.

/u01/my_archives/<Hostname> -anon=65534,async,root=<Hostname>

  where <Hostname> denotes the hostname of the ignite client.


2.  After editing the /etc/exports file, run exportfs -av.

3.  Run the following command on archive server.

     chown bin:bin /u01/my_archives/<Hostname>

4.  To create the network recovery archive, run the following command on  Ignite-UX client.

make_net_recovery -Av -a <archive-server>:<archive-path> -s <Ignite-UX_server>

where  A = for including all the files from the PV/disk that contains the root Volume Group.

            v = for verbose mode.

            a = for specifying the archive server.

            s = for specifying the Ignite-UX server.
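
A filled-in invocation, assuming the archive server and the Ignite-UX server are both 10.237.93.115 and the client hostname is ggntest1 (all names here are only illustrative):

make_net_recovery -Av -a 10.237.93.115:/u01/my_archives/ggntest1 -s 10.237.93.115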


5.  The log file of archive creation can be found at

/var/opt/ignite/clients/<Hostname>/recovery/<Date,Time>/recovery.log