Sunday, 24 November 2013

VxVM interview questions

1)    Can you reduce a FS in VxVM? What is the risk involved?
There are a few risks and issues involved when reducing a filesystem in VxVM.
1.1)    UFS volumes cannot be shrunk; only the grow operation is permitted.
1.2)    VxFS volumes must be mounted for grow and shrink operations; if the filesystem is unmounted, no resize operation can be performed.
1.3)    You cannot resize a volume that contains plexes with different layout types (e.g. concat and stripe). Attempting to do so results in the following error message:
              VxVM vxresize ERROR V-5-1-2536 Volume volume has different
               organization in each mirror
 

To resize such a volume successfully, you must first reconfigure it so that each data plex has the same layout type.
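For reference, a grow followed by a shrink on a mounted VxFS volume might look like the following (diskgroup, volume and sizes are placeholders; vxresize resizes the volume and the filesystem together). This is illustrative only and not runnable outside a VxVM host:

```
# grow the VxFS volume datavol in diskgroup datadg to 10 GB
/etc/vx/bin/vxresize -g datadg datavol 10g

# shrink it back to 5 GB (the VxFS filesystem must be mounted for this)
/etc/vx/bin/vxresize -g datadg datavol 5g
```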

2)    What is the role of the vxconfigd daemon?
vxconfigd handles all configuration management tasks for VxVM objects and provides the interface between VxVM utilities and the kernel device drivers. It maintains disk and diskgroup configuration details, handles configuration change requests from VxVM utilities, communicates those changes to the VxVM kernel, and modifies the persistent configuration information stored on disk. vxconfigd also initializes VxVM when the system is booted.

So practically any command which changes the configuration of VxVM objects (plexes, volumes) interacts with the vxconfigd daemon.
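Day to day, the state of vxconfigd can be checked and reset with vxdctl (VxVM-specific commands, shown for reference only):

```
vxdctl mode      # reports whether vxconfigd is running and in enabled mode
vxdctl enable    # re-enables vxconfigd (e.g. after it was disabled) without a reboot
```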

3)    Where is the diskgroup configuration information stored in VxVM?
The live DG configuration is stored in the private region of each disk in the diskgroup. Configuration backups are additionally kept under the /etc/vx/cbr/bk directory (maintained by the vxconfigbackup utility); the last 5 configuration changes are stored there.


4)    What is stored in the private region (privlen) of a VxVM disk? Do all the disks of a diskgroup have the same information in their private region?
The private region of a VxVM disk stores the disk header label and configuration information about VxVM objects such as volumes, plexes and subdisks.


Yes, each disk in a diskgroup stores an entire copy of the diskgroup's configuration information.

5)    Can you import a DG with an incomplete set of disks?
Yes. If some of the disks in a diskgroup have failed, you can force the import of the DG using the -f option. Use this with care: force-importing a diskgroup with missing disks can cause a conflicting import if those disks are actually in use on another host.
# vxdg -f import diskgroup


6) How will you replace a faulty disk in VxVM ?
A disk which has failed can be identified from the output of the vxdisk list command. The disk's device (c#t#d#) will be missing from the DEVICE column and its status will show as "failed was:c#t#d#". This confirms that the disk has indeed failed.
We can use the vxdiskadm menu options to replace the faulted disk. Choose option 5 from the menu, then list the failed disks using the 'list' option. We then have to choose an alternate disk for the replacement; the list of available disks is also shown by vxdiskadm. Press 'y' and complete the replacement activity.
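As a quick illustration of spotting such disks, the "failed was:" status can be filtered out with awk. The vxdisk output below is a fabricated sample (column layout assumed from typical VxVM output); on a real host you would pipe vxdisk list directly into awk.

```shell
# Fabricated 'vxdisk list' output written to a temp file; the line with
# "failed was:" marks the disk whose device has vanished.
cat > /tmp/vxdisk.out <<'EOF'
DEVICE       TYPE            DISK         GROUP        STATUS
c0t0d0s2     auto:sliced     rootdisk     rootdg       online
-            -               datadg01     datadg       failed was:c1t2d0s2
EOF

# Print the disk-media name and the last known device of each failed disk
awk '/failed was:/ { sub("was:", "", $NF); print $3, $NF }' /tmp/vxdisk.out
# -> datadg01 c1t2d0s2
```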

Tuesday, 19 November 2013

Solaris Interview Questions

1) Can you reduce a FS in VxVM? What is the risk involved?
2) What are branded zones in Solaris?
3) What is the main enhancement in NFS v4 over v3?
4) How can we refresh the automount config without affecting connected users?
5) How many types of automount maps are there?
6) How can you patch a Solaris zone which is a sparse root?

7) What is the difference between a zone and a container in Solaris?
8) How can we assign a zpool to an NGZ?
9) How is thin provisioning / a sparse volume created in ZFS?


10) Suppose that after a power outage or a mistake by the storage admin, one of your LUNs in VxVM became unavailable, and now it has become available again. But your mountpoint is throwing I/O errors. How will you resolve this situation?
11) What are the advantages of using ZFS instead of UFS ?
12) Which is better, hardware RAID or software RAID? Why?
13) After a power outage or kernel panic, a ZFS pool is not getting imported. What are the steps for troubleshooting this?
14) Which virtualization solution would you recommend to a customer, LDOM or zones? Under what scenarios and considerations?
15) Suppose you have a low-memory condition on a Solaris server. How will you troubleshoot further?
16) What has changed in network configuration in Solaris 10 compared to Solaris 8?

17) How can you restore LDOM configuration ?
18) How can we clone LDOM ?
19) Which field of iostat indicates an I/O bottleneck? What are the important metrics to look at in iostat output?
20) In maintaining Solaris security what are the common steps / procedures followed by System Admins ?
21) In Solaris we have resource pools (rpool). We can assign resource pools to zones as well, to set CPU shares. Then why do we need resource controls in zones (add rctl)? What is the advantage of using resource controls when resource pooling is already there?
22) How to install multiple patches in a single patchadd invocation?

Monday, 18 November 2013

VCS VXVM DiskGroup switchover example



Step 1:- Start the desired haagent (DiskGroup) on both the machines.

client1: /tmp/vcs> haagent -start DiskGroup -sys client1
client1: /tmp/vcs> haagent -start DiskGroup -sys client2

Step 2:- Add the servicegroup and populate the SystemList.

client1: /tmp> haconf -makerw
client1: /tmp> hagrp -add DG_servicegroup
VCS NOTICE V-16-1-10136 Group added; populating SystemList and setting the Parallel attribute recommended before adding resources
client1: /tmp> hagrp -modify DG_servicegroup SystemList client1 1 client2 2

Step 3:- Add the resource, specify the type of the resource, assign it to a servicegroup, enable the resource, specify the DiskGroup name, and bring the resource online.

client1: /tmp> hares -add VXVMDG DiskGroup DG_servicegroup
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
client1: /tmp> hares -modify VXVMDG Enabled 1
client1: /tmp> hares -modify VXVMDG DiskGroup clusterdg
client1: /tmp> hares -modify VXVMDG StartVolumes 1
client1: /tmp> hares -online VXVMDG -sys client1

Step 4:- Add a "Mount" resource type named clustervol (or whichever name is easy to identify with), specify the correct block device name, FS type, mountpoint and fsck option. Enable the resource and bring it online.

client1: /tmp> hares -add clustervol Mount DG_servicegroup
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
client1: /tmp> hares -modify clustervol BlockDevice /dev/vx/dsk/clusterdg/clustervol
client1: /tmp> hares -modify clustervol FSType vxfs
client1: /tmp> hares -modify clustervol MountPoint /clustervol
client1: /tmp> hares -modify clustervol FsckOpt %-y
client1: /tmp> hares -modify clustervol Enabled 1
client1: /tmp> hares -online clustervol -sys client1

Step 5:- Link the two resources, i.e. "DiskGroup" and "Mount", and enable the resources contained within the servicegroup.

client1: /tmp> hares -link clustervol VXVMDG
client1: /tmp> hagrp -enableresources DG_servicegroup
client1: /tmp> hagrp -online DG_servicegroup -sys client1
client1: /tmp> haconf -dump -makero
client2: /tmp> mkdir /clustervol
client1: /tmp> hagrp -switch DG_servicegroup -to client2

Friday, 15 November 2013

Extended process list in Solaris 10 and HPUX, without truncated lines



Some processes typically have a very long command line (more than 1000 characters of arguments), and ps -ef | grep <pid> will show only one truncated line (max 80 characters). Both Solaris and HP-UX have extensions through which we can view the full command line, without truncation.


To retrieve the full command line of a process by running ps:

on Solaris :-
/usr/ucb/ps -agxuwwwww PID

On HPUX :-
ps -exx
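There is no single portable flag for this. For comparison, a rough GNU/Linux equivalent with the procps ps is sketched below (-ww disables the output-width truncation); Solaris additionally ships pargs <pid> for the same job.

```shell
# Start a throwaway process, then print its full, untruncated command line.
# -ww removes the line-width limit; -o args= prints only the arguments of one PID.
sleep 30 &
pid=$!
ps -ww -o args= -p "$pid"    # -> sleep 30
kill "$pid"
```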

Tuesday, 12 November 2013

Installing HP-UX clients from Golden Image

      


A Golden Image is a compressed archive of a current system. It contains all the software and hardware configurations on the existing system. This can be deployed to clients on the network who have similar hardware configurations.

Steps to clone a client using Golden Image.

1. Edit the .rhosts file on the Ignite-UX server as well as on the client, so that we can store the archive directly on the server and later reboot the client from the server for the installation.

[ignite-server]# cat /.rhosts
10.237.93.112               root


2. Use the following directory for storing archives

[ignite-server]#pwd
/var/opt/ignite/archives


3. Add the following directories to the list of NFS-exported directories

[ignite-server]# vi /etc/exports

/var/opt/ignite/clients -anon=2
/var/opt/ignite/archives -anon=2
 
 
[ignite-server]#exportfs -av
 
 
 
4. From the client machine we will run the make_sys_image script so that the archive is stored on the Ignite server at /var/opt/ignite/archives.
 
[client]# pwd
/opt/ignite/data/scripts
[client]#./make_sys_image -s 10.237.93.115 -d /var/opt/ignite/archives
 
    - where 10.237.93.115 is IP of Ignite server.
 
 
5. Once the archive creation is complete, from the Ignite-UX server we can edit our configuration file to reflect the changes we want to make.
We will copy the example file provided by HP to the directory where we will store our other configuration files.
 
[ignite-server]# cd /opt/ignite/data/examples
[ignite-server]# cp core11.cfg /var/opt/ignite/data/Rel_B.11.31/archive11.cfg
 
 
6. While creating a Golden Image we need to manually calculate the archive impact; the output will help us in deciding the sizes of the mountpoints.
 
[ignite-server]#pwd
/var/opt/ignite/archives
[ignite-server]#ls
ggntest1.gz
[ignite-server]#/opt/ignite/lbin/archive_impact -t -g /var/opt/ignite/archives/ggntest1.gz > /tmp/GOLDEN.impacts
 
 
 
7. We can use the impact statements generated to edit our config files. The archive11.cfg file is critical, so edit it very carefully.
 
[ignite-server]# vi /var/opt/ignite/data/Rel_B.11.31/archive11.cfg
 
........
 
 Things to note in this file are:
 
 nfs_source = "10.237.93.115:/var/opt/ignite/archives"
 archive_path = "ggntest1.gz"
 
  - the archive_path is relative to the nfs_source specified earlier in the file. The archive will be transferred via NFS to the client. Also check the permissions on the archive file (755); otherwise Ignite will throw a lot of errors.
 
post_load_script = "/opt/ignite/data/scripts/os_arch_post_l"
post_config_script = "/opt/ignite/data/scripts/os_arch_post_c"
 
These two scripts are also run to ensure that installation is complete.
 
All the impacts have to be filled up using the file generated previously.
 
impacts="/" 2048000Kb
impacts="tmp" 6144000Kb
.
.
 
 
 
8. Use the save_config command to create a configuration file for the disk and hardware configuration (e.g. hardware paths).
 
[client]# save_config -f /tmp/save_config.out vg00
 
Copy this over to the ignite server.
 
[client]#rcp /tmp/save_config.out 10.237.93.115:/var/opt/ignite/data/Rel_B.11.31/archive_disk.cfg
 
The file archive_disk.cfg contains the hardware paths and the entire volume-group-related information. Edit this file to reflect all the required customizations. If the client being installed does not have a similar hardware configuration, the installation might fail.
 
 
 
9. In the INDEX file we need to create an entry for Golden Image so that it shows up at client config window and we can point to it while booting the client. We need to add the path to our custom configuration files so that these files are read while booting clients.
 
 Add the following lines below the default entries.
 
[ignite-server]#vi /var/opt/ignite/INDEX
.
.
cfg "Golden Image" {
 
        description "HP-UX B.11.31 Golden Image"
 
        "/var/opt/ignite/data/Rel_B.11.31/archive11.cfg"
 
        "/var/opt/ignite/data/Rel_B.11.31/archive_disk.cfg"
 
        "/var/opt/ignite/config.local"
 
}
 
 
 
 
 
10. Check the files for syntax errors as follows
 
[ignite-server]# instl_adm -T
       * Checking file: /opt/ignite/data/Rel_B.11.11/config
       * Checking file: /opt/ignite/data/Rel_B.11.11/hw_patches_cfg
       * Checking file: /var/opt/ignite/config.local
       .
       * Checking file: /opt/ignite/data/Rel_B.11.31/config
       * Checking file: /opt/ignite/data/Rel_B.11.31/hw_patches_cfg
       * Checking file: /var/opt/ignite/data/Rel_B.11.31/archive11.cfg
       * Checking file: /var/opt/ignite/data/Rel_B.11.31/archive_disk.cfg
 
[ignite-server]#manage_index -l
HP-UX B.11.11 Default
HP-UX B.11.23 Default
HP-UX B.11.31 Default
Golden Image
 
As our Golden Image is showing up in the index, we can boot the client and point it to install from the configuration.
 
11. Boot the client from the ignite server by using the following command.
 
[ignite-server]#bootsys -i "Golden Image" -f client
 


Procedure for installing Ignite-UX clients over the network




1. For setting up the Ignite server, the Ignite-UX server should have the following entry in the /etc/exports file:

     /var/opt/ignite/clients -anon=2

2.  Check the config files with this command:

      instl_adm -T

    The config files should be world readable. If they are not, instl_adm -T will report the error.

     The config files required are as follows:

      /opt/ignite/data/Rel_B.11.11/config

      /opt/ignite/data/Rel_B.11.11/hw_patches_cfg

      /var/opt/ignite/config.local


 3.  The /etc/inetd.conf file should have the following settings for tftp and instl_boots.


      tftp        dgram  udp wait   root /usr/lbin/tftpd    tftpd\
        /opt/ignite\
        /var/opt/ignite

       ->  The tftp service should have access to /opt/ignite and /var/opt/ignite so that it can

              transfer files using tftp during installation.

    
      instl_boots dgram udp wait root /opt/ignite/lbin/instl_bootd instl_bootd


  4.    The /etc/opt/ignite/instl_boottab file should have the following entry corresponding to each host that we want to boot using the Ignite-UX server:

           <IP-address>:<Mac-address(with leading 0x)>::

   The last field should be left blank, as it is automatically updated by the Ignite-UX server when it receives an installation request from the client with the corresponding MAC address.
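A hypothetical instl_boottab entry in that format (the IP and MAC below are made-up example values):

```
# /etc/opt/ignite/instl_boottab
10.237.93.112:0x00306E4A133F::
```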




Note -  The tftpd and instl_bootd daemons are started by the Ignite-UX server when it

              receives a request for installation. Do not try to start these daemons manually.


5.  After this, reboot the client and interrupt the boot process to stop it at the BCH> prompt.

    From the BCH> prompt, type 'sea lan install' to search for the Ignite server.

     The output will be similar to the following.



Main Menu: Enter command or menu > sea lan install

 Searching for potential boot device(s) - on Path 0/1/2/0
     This may take several minutes.

To discontinue search, press any key (termination may not be immediate).


   Path#  Device Path (dec)  Device Path (mnem)  Device Type   IODC Rev
   -----  -----------------  ------------------  -----------   --------
   P0     0/1/2/0            lan.10.237.93.115   LAN Module    4


  

   This means that the server is giving a valid offer for installation.


6.   Next type the following command to boot from the server.

     BCH> boot lan.10.237.93.115 install

   After this, the installation procedure is similar to the normal procedure for an HP-UX installation. Carefully select the recovery archive from which the OS needs to be installed if there is more than one image.



      

Procedure for creating Ignite-UX recovery archive over the network




1.  Determine the archive server and archive path where you want to store your Ignite recovery archive. This server can be different from the Ignite-UX server used for booting clients over the network, or it can be the same server.
                  The archive path should be NFS-exported before executing the make_net_recovery command, because it is NFS-mounted on the Ignite-UX client before archive creation. The archive is then transferred via tar to the NFS-mounted directory of the archive server.

 For example, if the archive path on archive server is /u01/my_archives/<Hostname> then the /etc/exports file on archive server should have the following entry.

/u01/my_archives/<Hostname> -anon=65534,async,root=<Hostname>

  where <Hostname> denotes the hostname of the ignite client.


2.  After editing the /etc/exports file, run exportfs -av.

3.  Run the following command on archive server.

     chown bin:bin /u01/my_archives/<Hostname>

4.  To create the network recovery archive, run the following command on  Ignite-UX client.

make_net_recovery -Av -a <archive-server>:<archive-path> -s <Ignite-UX_server>

where:

            -A : include all the files from the PV/disk that contains the root volume group.

            -v : verbose mode.

            -a : specify the archive server and path.

            -s : specify the Ignite-UX server.


5.  The log file of archive creation can be found at

/var/opt/ignite/clients/<Hostname>/recovery/<Date,Time>/recovery.log



Tuesday, 24 September 2013

Vskills certification on Cloud Computing

Recently completed a certification on cloud computing from Vskills. It is a vendor-neutral certification and gives a good overview of all available cloud offerings. The study material starts by explaining the evolution of cloud computing, right from the Mainframe, Distributed and Virtualization eras.

As you might already know, over the last decade virtualization techniques have come of age and commodity hardware has also matured. Customers whose main focus is not IT feel burdened by IT overhead costs like manpower requirements, datacenter upkeep, power requirements etc. They wish they could offload these tasks and focus on their main business. Organisations also typically want quick deployment and QA times and do not want to go through the whole process of gathering requirements from teams, discussing with vendor sales teams, placing orders, and waiting for server delivery and installation. They want to use IT as a utility.

Given below are common cloud service models:-
IaaS – Infrastructure as a Service, refers to offerings from cloud vendors providing compute, network and storage. It is the customer's responsibility to install the OS, database or apps, and to manage security and patching. The customer has full flexibility in this model.
PaaS – Platform as a Service, where the cloud vendor provides pre-installed OS images along with databases / apps, security tools etc. The customer installs their own software and starts using the instance.
SaaS – Software as a Service, where end-product software is provided by the vendor, like e-mail / messaging, sales dashboards, blog hosting, HR payroll, training modules etc. The customer has the least flexibility for customization under this model.

The certification also introduces you to popular cloud offerings which are listed below:-

Commercial Cloud offerings
Amazon AWS – The market leader in Cloud space, offers IaaS and PaaS services like Amazon EC2, S3, Redshift, Beanstalk.
Google – Provides PaaS as Google App Engine and SaaS in form of Google Apps.
Microsoft Azure – Offers PaaS and IaaS services.
Salesforce – Popular as a SaaS service, provides sales collaborative tool known as The Sales Cloud.
Microsoft Office 365 – Provides MS office and other business productivity tools. It is a SaaS service.



Open source cloud offerings
Cloud Foundry – Developed by VMware and hosted on the VMware platform, offers MongoDB, MySQL etc. as a PaaS offering.
OpenStack – IaaS project under the Apache License, supported by companies like AMD, Brocade, SUSE Linux, Red Hat, VMware, Yahoo, HP, IBM, Intel, Rackspace, Cisco and EMC, among others.
Eucalyptus – AWS-compatible open source software for building private and public clouds.
Ubuntu One – File synchronisation and backup platform.

The exam also had some scenario-based questions on Amazon AWS. There were 50 questions in total, to be answered in a 60-minute timeframe.

Overall it was a good experience, and I strongly recommend this certification for beginners, for those without any cloud experience, and for anyone who wants to learn more about cloud computing. Clearing this exam will give you the confidence to study further on Amazon AWS and will act as a first step towards future accomplishments.
The certification costs 3000/- INR. You can visit the Vskills website at www.vskills.in/certification/

Thursday, 13 June 2013

Unix / Solaris Password Expiration Automated email notification

I have been entrusted with setting up a mail alert system for user password expiration. The user should automatically be notified by mail a few days before his password expiration date. I wrote a small script with help from www.unix.com and other forums.
Below is the script for checking the age of the password and alerting the user if the password is going to expire in the next 15 days.

Script Name  :- /usr/bin/solchage

---script start here----

#!/usr/bin/bash
umask 0022
PATH=/usr/bin:/usr/sbin
SHADOW=/etc/shadow
DSHADOW=/etc/shadow.dummy
USER=$1


# Copy the contents of /etc/shadow to a dummy file and make sure the entries for system  
# users are not there in the dummy file. Also replace the encrypted password field with      
# *LK* to make sure passwords are not visible or cannot be copied by someone else.

cat ${SHADOW} | egrep -v "root|daemon|etc" | awk -F: '{print $1,"*LK*",$3,$4,$5,$6,$7,$8}' | sed 's/ /:/g' > ${DSHADOW}

PASSWDFILE=/etc/passwd

# Specify the mail domain of your company here.
DOMAIN=xyz.com

# The next line extracts the user's email id from the GECOS field of /etc/passwd. So as a
# prerequisite to running this script, you must enter the email id of the user, without the
# domain name, in the GECOS field as I have assumed here. Let me know if you can think of a
# more elegant way of extracting this information.

EMAIL=`grep "^${USER}:" ${PASSWDFILE} | awk -F: '{print $5}'`

# Save the message in a file.
FILE=/tmp/msg.$$


# Set the password policy here, i.e. the number of days after which the user must
# change the password.
PWPOLICY=90

# Set the warning period here.
WARN=15


# Calculate the number of seconds elapsed since Jan 1, 1970, i.e. the Unix epoch.

EPOCH=`perl -e 'print time;'`


# Convert the number of seconds into days.

DAYSEPOCH=`expr ${EPOCH} / 86400`


# Calculate the number of days since the password was last changed for a particular
# user. This info is extracted from the 3rd field of the shadow file. It is expressed as
# the number of days between January 1, 1970, and the date that the password was last
# modified.


LASTCHG=`grep "^${USER}:" ${DSHADOW} | awk -F: '{print $3}'`



# Subtract the above value from the number of days since the epoch to arrive at the
# number of days since the last password change.

PASSWDCHANGE=`expr ${DAYSEPOCH} - ${LASTCHG}`


EXPIRED=`expr ${PWPOLICY} - ${PASSWDCHANGE}`


if [ "${EXPIRED}" -lt "${WARN}" ]; then

cat > ${FILE} <<EOF
Dear ${USER},

Your password will expire in ${EXPIRED} days. Please change it as soon as possible.
EOF


mailx -s "Password expiring soon." ${EMAIL}@${DOMAIN} < ${FILE}

fi

--- script end here ---

To run the above main script, you have to run another small script which I show below.
Copy the above script, place it under /usr/bin and name it solchage. Of course you can give it another name, it's up to you, but make the corresponding changes in the script below as well if you do so.

Let's name the second script /var/pwexpire.sh. Put this script in crontab for execution once every day. It will run for all users and send them a mail if their password is going to expire within 15 days.

Script Name:- /var/pwexpire.sh

--- script begin here ---


cat /etc/passwd | egrep -v "root|daemon|etc|sys|adm|lp|uucp|nuucp|smmsp|listen|gdm|webservd|postgres|svctag|nobody|noaccess|nobody4" | awk -F: '{print $1}' | egrep -v "bin" | xargs -I {} /usr/bin/solchage {}

---script end here---


What does the above script do? Let us examine it step by step.

1) It reads the /etc/passwd file and filters the system users out of the list.
2) Then it prints the remaining usernames using awk, keeping only the first field of each entry.
3) Then xargs executes our script /usr/bin/solchage once for every listed user. This is required because our script takes a username as its argument (see USER=$1 above) and runs for that particular user.
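The daily cron execution mentioned above could be set up with a root crontab entry like this (the 02:00 schedule is just an example; edit with crontab -e):

```
# minute hour day-of-month month day-of-week  command
0 2 * * * /var/pwexpire.sh
```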

You will have to give execute permissions to both the scripts.
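The expiry arithmetic used by solchage can be sanity-checked in isolation with a fabricated last-change value. Here date +%s stands in for the perl call (the script itself uses perl only because very old Solaris date lacked %s):

```shell
PWPOLICY=90        # password must be changed every 90 days
WARN=15            # warn when fewer than 15 days remain

EPOCH=`date +%s`                               # seconds since the Unix epoch
DAYSEPOCH=`expr ${EPOCH} / 86400`              # ...converted to whole days
LASTCHG=`expr ${DAYSEPOCH} - 80`               # pretend the password is 80 days old

PASSWDCHANGE=`expr ${DAYSEPOCH} - ${LASTCHG}`  # days since last change -> 80
EXPIRED=`expr ${PWPOLICY} - ${PASSWDCHANGE}`   # days left -> 90 - 80 = 10

if [ "${EXPIRED}" -lt "${WARN}" ]; then
    echo "password expires in ${EXPIRED} days"   # -> password expires in 10 days
fi
```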

Sunday, 9 June 2013

Nagios server setup on Linux


1. Download the source code tarballs of both Nagios and the Nagios plugins (visit http://www.nagios.org/download/ for links to the latest versions).
wget http://osdn.dl.sourceforge.net/sourceforge/nagios/nagios-3.0.3.tar.gz
wget http://osdn.dl.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.11.tar.gz

2. Login to the server as root user


3. Create a new nagios user account and assign it a password.
# useradd -m nagios
# passwd nagios

4. Create a new nagcmd group for allowing external commands to be submitted through the web interface. Add both the nagios user and the Apache user to the group.
# /usr/sbin/groupadd nagcmd
# /usr/sbin/usermod -G nagcmd nagios
# /usr/sbin/usermod -G nagcmd apache

5. Extract the Nagios source code tarball.
# cd ~/downloads
# tar xzf nagios-3.0.3.tar.gz
# cd nagios-3.0.3

Run the Nagios configure script, with the name of the group nagcmd created earlier :
# ./configure --with-command-group=nagcmd

6. Compile the Nagios source code.
# make all
Install binaries, init script, sample config files and set permissions on the external command directory as shown in the below steps
# make install
# make install-init
# make install-config
# make install-commandmode

7. Customize Configuration
Sample configuration files have now been installed in the /usr/local/nagios/etc directory. These sample files should work fine for getting started with Nagios. You'll need to make just one change before you proceed...
Edit the /usr/local/nagios/etc/objects/contacts.cfg config file using vi editor and change the email address associated with the nagiosadmin contact definition to your email address 
# vi /usr/local/nagios/etc/objects/contacts.cfg

8. Configure the Web Interface
Install the Nagios web config file in the Apache conf.d directory.
# make install-webconf
/usr/bin/install -c -m 644 sample-config/httpd.conf /etc/httpd/conf.d/nagios.conf

8. Create a nagiosadmin account for logging into the Nagios web interface.
# htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
Restart Apache to make the new settings take effect.
# service httpd restart

9. Compile and Install the Nagios Plugins

# cd ~/downloads
# tar xzf nagios-plugins-1.4.11.tar.gz
# cd nagios-plugins-1.4.11
# ./configure --with-nagios-user=nagios --with-nagios-group=nagios
# make
# make install




10. Object configuration files
When the configuration files are split up, Nagios reads the host and service check definitions from these separate files. Templates for all of them are provided in the localhost.cfg file, and we need to copy the definitions into separate files as shown below.

10.1 Create the configuration files

# cd /usr/local/nagios/etc/objects/
# touch hostgroup.cfg hosts.cfg services.cfg

10.2 Copy the service, host and hostgroup definitions

Copy the service definitions out of localhost.cfg and paste them into services.cfg:

# vi localhost.cfg
# vi services.cfg

Copy the host definitions out of localhost.cfg and paste them into hosts.cfg:

# vi localhost.cfg
# vi hosts.cfg

Copy the hostgroup definitions out of localhost.cfg and paste them into hostgroup.cfg:

# vi localhost.cfg
# vi hostgroup.cfg

Setting up nagios.cfg
# cd /usr/local/nagios/etc/objects
# mv localhost.cfg localhost.cfg_org

Next configure the main nagios.cfg file.
# vi /usr/local/nagios/etc/nagios.cfg
and make the changes shown below

# OBJECT CONFIGURATION FILE(S)
cfg_file=/usr/local/nagios/etc/objects/contacts.cfg
cfg_file=/usr/local/nagios/etc/objects/hostgroup.cfg
cfg_file=/usr/local/nagios/etc/objects/hosts.cfg
cfg_file=/usr/local/nagios/etc/objects/services.cfg
cfg_file=/usr/local/nagios/etc/objects/timeperiods.cfg

# EXTERNAL COMMAND OPTION
check_external_commands=1

# EXTERNAL COMMAND CHECK INTERVAL
command_check_interval=1


11. Starting Nagios

# chkconfig nagios on
# nagios -v nagios.cfg

Nagios 2.4
Copyright (c) 1999-2006 Ethan Galstad (http://www.nagios.org)
Last Modified: 05-31-2006
License: GPL

Reading configuration data...

Running pre-flight check on configuration data...

Total Warnings: 85
Total Errors:   0

Things look okay - No serious problems were detected during the pre-flight check

# service nagios start

Starting network monitor: nagios
 

SUNWjet server installation steps ( jumpstart )


SUNWjet is a newer, enhanced version of JumpStart and is easier to configure than older versions of JumpStart. You can download the SUNWjet package from OTN at this link: http://www.oracle.com/technetwork/systems/jet-toolkit/index.html/

The steps to install and configure your JET server are:-
1) # pkgadd -d . SUNWjet   ( install the package)
2) # mount -o ro -F hsfs /dev/dsk/c0t4d0s2 /cdrom                  (mount the solaris DVD)
3) # /opt/jet/bin/copy_solaris_media /cdrom    (by default image will get copied to /export/install/media)
4) # /opt/jet/bin/list_solaris_locations
5) # mkdir /export/install/patches
6) # mkdir /export/install/pkgs
7) # /opt/jet/bin/make_template solclnt01     (create a template file)
8) # vi /opt/jet/Templates/solclnt01           (edit the 3 parameters listed below)
base_config_ClientArch="sun4u"
base_config_ClientEther=0:3:ba:ef:60:39
base_config_ClientOS="10"
9) # /opt/jet/bin/make_client solclnt01
 
From ok prompt of the client machine, type the below command to get started
10) ok   boot net - install -w
 
After this step, rest of the installation is vanilla.

Disabling sendmail daemon (SMTP) on solaris 10

The sendmail daemon runs on port 25 and is enabled by default on Solaris boxes.
The sendmail daemon does not need to run on servers which are meant to be mail clients only. To disable the sendmail service use the steps below:-

1. Edit /etc/default/sendmail. Create the file if it's not already there and include the following values:
MODE=Ac
QUEUEINTERVAL="15m"


2. Stop the sendmail service 
/etc/init.d/sendmail stop

3. Now edit /etc/mail/submit.cf
and change the line shown here: D{MTAHost}[127.0.0.1]
to:
D{MTAHost}[<your-mail-server-ip>]

4.  Start the sendmail service.
    /etc/init.d/sendmail start

Now port 25 on localhost will be closed, and the server won't be listening on that port anymore.

Sunday, 19 May 2013

Sendmail with To, Cc , Bcc fields and subject ,message body etc




Sendmail is a pretty old mail interface, and hence sending mail with all fields like subject, body and content using sendmail is a bit tedious. I have seen users sometimes getting frustrated with sendmail because they do not know how to use it properly and utilize its full features; they come to the unix admin for help in sorting out their issues. In this post I will show how to add To, Cc, Bcc fields and a subject / message body to a sendmail message, and share a trick to get sendmail working as desired.


First create a sendmail.txt file with below contents :-

*********************************************************************************

From: root@pnc.com
To: abhi@xyz.com
Cc: nlm@xyz.com
Subject: HELLO
Mime-Version: 1.0
Content-Type: text

Write the message body here , this is the mail content.  


*********************************************************************************

The above plain text file contains all the desired fields. The most important part is the Content-Type declaration. (Note that sendmail adds the Date: header itself.)

Then run the below command from unix prompt

# cat sendmail.txt | /usr/lib/sendmail -t 

The -t option of sendmail as per manpages


     -t          Read message for recipients.  To:, Cc:, and Bcc: lines will
                 be scanned for recipient addresses.  The Bcc: line will be
                 deleted before transmission.
 
The above command simply takes the input for the sendmail command from the text file we just created. Once you send the mail you can see that all fields are visible and sendmail works as per your requirement. Do let me know if you face any issues running sendmail with these options.
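A sketch of building such a message file with a heredoc. The Date: line is left out here because shell backticks are not expanded inside a plain text file, and sendmail normally adds a Date: header by itself:

```shell
# Write the header block, a mandatory blank line, then the body.
cat > /tmp/sendmail.txt <<'EOF'
From: root@pnc.com
To: abhi@xyz.com
Cc: nlm@xyz.com
Subject: HELLO
Mime-Version: 1.0
Content-Type: text

Write the message body here, this is the mail content.
EOF

# Then hand it to sendmail (not executed in this sketch):
#   /usr/lib/sendmail -t < /tmp/sendmail.txt

grep -c '^To:' /tmp/sendmail.txt    # -> 1
```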
Friday, 17 May 2013

How to delete all files in a directory except the last 3 files ?

The trick here is as follows:
1) List the files sorted by modification time, newest first.
2) Use tail (or head) to skip the 3 newest files, and
3) then run 'rm' on all files that remain.

Suppose you have to delete all files except the latest 3 from the folder /var/path_to_folder.
So the below command will accomplish your task.

# cd /var/path_to_folder && /usr/xpg4/bin/ls -t | tail -n +4 | xargs rm


The above command lists all files in the directory sorted by last modification time, newest first (you may add the -u switch to ls to sort by last access time instead; see the ls man page for details). The tail command then takes that listing and prints it from the 4th line onward: -n +4 skips the first 3 lines, which are the 3 newest files. Pay attention to the + sign before the number: -n -4 would instead print only the last 4 lines of the output, an entirely different result. The meanings and usage of + and - are very different, and the behaviour changes again if head is used instead of tail. So before going on to the next step with xargs and rm, check the output of the ls | tail stage on its own; misreading it here means accidentally deleting the very files you wanted to keep.

Why use xargs ? Because rm does not read filenames from its standard input; it only acts on names passed as arguments. The output of tail is a list of files, one per line, on stdout. xargs bridges the gap: it reads that list, constructs argument lists, and invokes the desired utility (in our case rm) as many times as needed to cover every file.
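The steps above can be wrapped in a small function that is a little more defensive: reading the list line by line lets filenames with spaces survive (names containing newlines are still not safe). The function name `keep3` is mine, not a standard tool:

```shell
#!/bin/sh
# keep3: delete everything in the given directory except the 3 most
# recently modified files. tail -n +4 starts printing at line 4,
# i.e. it skips the 3 newest names that ls -t lists first.
keep3() {
    cd "$1" || return 1
    ls -t | tail -n +4 | while IFS= read -r f; do
        rm -f -- "$f"        # -- protects names starting with '-'
    done
}

# Example (path from the post):
# keep3 /var/path_to_folder
```

Run it first with `rm` replaced by `echo` if you want to preview exactly which files would go.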

svcadm enhancements in Solaris 10 - Bind to localhost

Some services in Solaris 10 can be restricted based on local and global properties.
For example, in the rpcbind configuration, if the value of local_only is set to true, all RPC services are accessible from inside the machine, but an outside machine cannot access them.

bash-3.00# svccfg -s rpc/bind listprop config/local_only
config/local_only  boolean  false
bash-3.00#
bash-3.00# svccfg -s rpc/bind setprop config/local_only=true
bash-3.00#
bash-3.00# svcadm refresh rpc/bind
bash-3.00#
bash-3.00# svccfg -s rpc/bind listprop config/local_only
config/local_only  boolean  true
bash-3.00#
bash-3.00# svccfg -s rpc/bind setprop config/local_only=false
bash-3.00#
bash-3.00# svcadm refresh rpc/bind
bash-3.00#
bash-3.00# svccfg -s rpc/bind listprop config/local_only
config/local_only  boolean  false

This, in my opinion, is a significant security enhancement, especially where you want a particular service to be reachable from localhost but closed to outside machines. Many Solaris services have this kind of configurability.
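The setprop/refresh pair can be wrapped in a tiny helper so the toggle becomes one command. A sketch (the function name is mine, not a standard tool):

```shell
#!/bin/sh
# rpc_local_only true|false -- hypothetical helper wrapping the
# svccfg/svcadm sequence shown in the transcript above.
rpc_local_only() {
    svccfg -s rpc/bind setprop config/local_only="$1" &&
    svcadm refresh rpc/bind &&
    svccfg -s rpc/bind listprop config/local_only
}

# Example:
# rpc_local_only true
```

The final listprop call echoes the property back so you can confirm the change took effect.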



Wednesday, 15 May 2013

Solaris 10, project files


On Solaris 10, System V IPC parameters (e.g. shmmax, shmseg) are set under user-specific projects, and these values take effect on a per-project basis only; they are not system-wide values. We do not need to set them in /etc/system, and even if set there, the values are ignored.

All processes started by users who are members of a project inherit the parameter values from the /etc/project file.

hostA: /etc\> projects -l

user.oracle
        projid : 1001
        comment: "Oracle Project"
        users  : oracle
        groups : dba
                 oinstall
        attribs: process.max-sem-nsems=(priv,256,deny)
                 project.max-sem-ids=(priv,100,deny)
                 project.max-shm-ids=(priv,128,deny)
                 project.max-shm-memory=(priv,4294967296,deny)


hostA: /etc\> more /etc/project
system:0::::
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
user.oracle:100:Oracle Project:oracle:dba,oinstall:process.max-sem-nsems=(priv,256,deny);project.max-sem-ids=(priv,100,deny);project.max-shm-ids=(priv,128,den
y);project.max-shm-memory=(priv,4294967296,deny)


After editing the /etc/project file, we also need to grant the oracle user the privilege to use the project; otherwise, even if the project is created properly, it will not take effect.

ggnqccita2: /etc\> more /etc/user_attr
oracle::::project=user.oracle

The values set in the /etc/project file are dynamic and do not need a reboot to take effect (they apply to new processes started under the project). In previous versions of Solaris, the values set in /etc/system did not take effect until reboot.
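Rather than editing /etc/project by hand, the same project can be created with projadd and the user mapped with usermod. A sketch of the commands, with the attribute values copied from the example above (to be run as root on the Solaris 10 host; projadd assigns the next free project ID unless you pass one explicitly):

```shell
# Create the user.oracle project with the attributes shown above:
projadd -c "Oracle Project" -U oracle -G dba,oinstall \
    -K "process.max-sem-nsems=(priv,256,deny)" \
    -K "project.max-sem-ids=(priv,100,deny)" \
    -K "project.max-shm-ids=(priv,128,deny)" \
    -K "project.max-shm-memory=(priv,4294967296,deny)" \
    user.oracle

# Bind the oracle user to the project (writes /etc/user_attr):
usermod -K project=user.oracle oracle
```

Verify afterwards with `projects -l` as shown above.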


Tuesday, 14 May 2013

Can we recover a file from the lost+found folder ?


Yes we can, but files recovered by running fsck on the filesystem will have absurd names like #p89t3590: they retain the correct owner, group and inode information, but not their original names.
Files that appear in lost+found are typically files that were already unlinked (i.e. their name had been erased) but still open in some process (so the data was not yet freed) when the system halted suddenly (kernel panic or power failure).
Files can also appear in lost+found because the filesystem was left in an inconsistent state by a software or hardware bug.

Two files with same name in a directory !!!


Have you ever seen two files with the same name inside a directory ? Is it possible at all ? What if a user shows you this, right in front of your eyes ?

Actually, it is impossible to have two files with the same name in one directory; Unix does not permit it. If such behaviour is observed, one of the files must have control characters in its name. They are simply two different files, and so must have different inode numbers.

Check the different inode numbers of the files with below command.

# ls -lib           -----> this command will show inode number and any control characters in the filename.

The man page for ls says that -b will:
List nonprinting characters in the octal \ddd notation


Once we have the inode number (in the example below I am using 2460 as the inode number), we can easily use the find command to move the dubious file to a safe location and delete it there.

# find / -xdev -inum 2460 -exec mv {} /tmp/wastebin/ \;
# rm -rf /tmp/wastebin
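The whole scenario is easy to reproduce and rehearse in a scratch directory. A sketch, using a trailing backspace as the stand-in control character (the function name `demo_dupname` is mine):

```shell
#!/bin/sh
# Create two files whose names look identical: the second one has a
# trailing backspace. ls -b reveals it; find -inum removes it.
demo_dupname() {
    d=$(mktemp -d) && cd "$d" || return 1
    touch 'data'
    touch "data$(printf '\b')"     # same visible name, extra control char
    ls -lib                         # two inodes; second name shows as data\b
    inum=$(ls -i "data$(printf '\b')" | awk '{print $1}')
    find . -xdev -inum "$inum" -exec rm -f {} \;
    ls                              # only the real 'data' remains
}
```

In a real incident you would mv the file to a wastebin directory, as in the find command above, rather than rm it on the spot.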