How to install ESXi 5.1

1. Check that the server hardware you are installing ESXi 5.1 onto is supported and listed on the VMware HCL.
2. Log in to the VMware license portal to check/upgrade/buy your vSphere licenses.

3. Verify that the machine meets the minimum system requirements for installing ESXi:

Supported server platform


  • For a list of supported platforms, see the VMware Compatibility Guide.


64-bit Processor


  • ESXi 5.1 will install and run only on servers with 64-bit x86 CPUs.
  • ESXi 5.1 requires a host machine with at least two cores.
  • ESXi 5.1 supports only CPUs that provide the LAHF and SAHF instructions.
  • ESXi 5.1 requires the NX/XD bit to be enabled for the CPU in the BIOS.
  • ESXi 5.1 supports a broad range of x64 multicore processors. For a complete list of supported processors, see the VMware Compatibility Guide.


RAM


  • 2GB RAM minimum
  • Provide at least 8GB of RAM to take full advantage of ESXi 5.1 features and run virtual machines in typical production environments.


Hardware Virtualization Support


  • To support 64-bit virtual machines, support for hardware virtualization (Intel VT-x or AMD RVI) must be enabled on x64 CPUs.
  • To determine whether your server has 64-bit VMware support, download the CPU Identification Utility from vmware.com.


Network Adapters


  • One or more Gigabit or 10Gb Ethernet controllers. For a list of supported network adapter models, see the VMware Compatibility Guide.


SCSI Adapter, Fibre Channel Adapter or Internal RAID Controller

Any combination of one or more of the following controllers:


  • Basic SCSI controllers. Adaptec Ultra-160 or Ultra-320, LSI Logic Fusion-MPT, or most NCR/Symbios SCSI.
  • RAID controllers. Dell PERC (Adaptec RAID or LSI MegaRAID), HP Smart Array RAID, or IBM (Adaptec) ServeRAID controllers.


Installation and Storage


  • SCSI disk or a local, non-network, RAID LUN with unpartitioned space for the virtual machines.
  • For Serial ATA (SATA), a disk connected through supported SAS controllers or supported on-board SATA controllers. SATA disks will be considered remote, not local. These disks will not be used as a scratch partition by default because they are seen as remote. Note: You cannot connect a SATA CD-ROM device to a virtual machine on an ESXi 5.1 host. To use the SATA CD-ROM device, you must use IDE emulation mode.
  • Supported storage system:  ESXi 5.1 supports installing on and booting from these storage systems:

  • SATA disk drives connected behind supported SAS controllers. Supported SAS controllers include:
  1. LSI1068E (LSISAS3442E)
  2. LSI1068 (SAS 5)
  3. IBM ServeRAID 8K SAS controller 
  4. Smart Array P400/256 controller 
  5. Dell PERC 5.0.1 controller



  • SATA disk drives connected to supported on-board SATA controllers. Supported on-board SATA controllers include:



  • Intel ICH9
  • NVIDIA MCP55
  • ServerWorks HT1000

Note: ESXi does not support using local, internal SATA drives on the host server to create VMFS datastores that are shared across multiple ESXi hosts.


  • Serial Attached SCSI (SAS) disk drives supported for installing ESXi 5.1 and for storing virtual machines on VMFS partitions.
  • Dedicated SAN disk on Fibre Channel or iSCSI
  • For a list of USB devices supported for installing ESXi 5.1, see the VMware Compatibility Guide.
  • You can install and boot ESXi from an FCoE LUN using VMware software FCoE adapters and network adapters with FCoE offload capabilities. See the vSphere Storage documentation for information about installing and booting ESXi with software FCoE.

4. Download the VMware ESXi 5.1 ISO file from the VMware download area.

5. Burn the ESXi 5.1 ISO to a CD.

6. Disconnect all Fibre Channel connections (if any) and boot the server from the CD.

7. Select "ESXi-5.1 Installer"



8. When you are ready to install, press "Enter".

How to verify Etherchannel or Link Aggregate status

Use the entstat command to get the aggregate statistics of all of the adapters in the EtherChannel.

For example, entstat ent3 will display the aggregate statistics of ent3. Adding the -d flag will also display the statistics of each adapter individually. For example, typing entstat -d ent3 will show you the aggregate statistics of the EtherChannel as well as the statistics of each individual adapter in the EtherChannel.

Note: In the General Statistics section, the number shown in Adapter Reset Count is the number of failovers. In EtherChannel backup, coming back to the main EtherChannel from the backup adapter is not counted as a failover. Only failing over from the main channel to the backup is counted.

In the Number of Adapters field, the backup adapter is counted in the number displayed.

How to list Etherchannels or Link Aggregations

Use this procedure to list EtherChannels or Link Aggregations.


  1. On the command line, type smitty etherchannel.
  2. Select List All EtherChannels / Link Aggregations and press Enter.

How to modify Etherchannel on AIX

Use this procedure to detach the interface and make changes on AIX® 5.2 with 5200-01 and earlier.


  1. Type smitty chinet and select the interface belonging to your EtherChannel. Change the Current STATE attribute to detach, and press Enter.
  2. On the command line type, smitty etherchannel.
  3. Select Change / Show Characteristics of an EtherChannel / Link Aggregation and press Enter.
  4. Select the EtherChannel or Link Aggregation that you want to modify.
  5. Modify the attributes you want to change in your EtherChannel or Link Aggregation and press Enter.
  6. Fill in the necessary fields and press Enter.

How to configure Etherchannel in AIX

Use this procedure to configure an EtherChannel.

1) Type smitty etherchannel at the command line.

2) Select Add an EtherChannel / Link Aggregation from the list and press Enter.

3) Select the primary Ethernet adapters that you want on your EtherChannel and press Enter. If you are planning to use EtherChannel backup, do not select the adapter that you plan to use for the backup at this point. The EtherChannel backup option is available in AIX® 5.2 and later.

Note: The Available Network Adapters list displays all Ethernet adapters. If you select an Ethernet adapter that is already being used (has an interface defined), you will get an error message. You first need to detach this interface if you want to use it.

4) Enter the information in the fields according to the following guidelines:

Parent Adapter: Provides information of an EtherChannel's parent device (for example, when an EtherChannel belongs to a Shared Ethernet Adapter). This field displays a value of NONE if the EtherChannel is not contained within another adapter (the default). If the EtherChannel is contained within another adapter, this field displays the parent adapter's name (for example, ent6). This field is informational only and cannot be modified. The parent adapter option is available in AIX 5.3 and later.

EtherChannel / Link Aggregation Adapters: You should see all primary adapters that you are using in your EtherChannel. You selected these adapters in the previous step.

Enable Alternate Address: This field is optional. Setting this to yes will enable you to specify a MAC address that you want the EtherChannel to use. If you set this option to no, the EtherChannel will use the MAC address of the first adapter.

Alternate Address: If you set Enable Alternate Address to yes, specify the MAC address that you want to use here. The address you specify must start with 0x and be a 12-digit hexadecimal address (for example, 0x001122334455).
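The required format can be checked from the shell before it is typed into SMIT; a minimal sketch (the address shown is the hypothetical example value above):

```shell
mac=0x001122334455   # example alternate address (hypothetical value)

# A valid alternate address is "0x" followed by exactly 12 hex digits
if printf '%s\n' "$mac" | grep -Eq '^0x[0-9a-fA-F]{12}$'; then
    echo "valid alternate address"
else
    echo "invalid alternate address"
fi
```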

Enable Gigabit Ethernet Jumbo Frames: This field is optional. In order to use this, your switch must support jumbo frames. This will only work with a Standard Ethernet (en) interface, not an IEEE 802.3 (et) interface. Set this to yes if you want to enable it.

Mode: You can choose from the following modes:

standard: In this mode the EtherChannel uses an algorithm to choose which adapter it will send the packets out on. The algorithm consists of taking a data value, dividing it by the number of adapters in the EtherChannel, and using the remainder (using the modulus operator) to identify the outgoing link. The Hash Mode value determines which data value is fed into this algorithm (see the Hash Mode attribute for an explanation of the different hash modes). For example, if the Hash Mode is default, it will use the packet's destination IP address. If this is 10.10.10.11 and there are 2 adapters in the EtherChannel, (11 / 2) = 5 with remainder 1, so the second adapter is used (the adapters are numbered starting from 0). The adapters are numbered in the order they are listed in the SMIT menu. This is the default operation mode.
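The adapter-selection arithmetic described above can be reproduced in the shell; a small sketch assuming the default hash mode (which feeds the last byte of the destination IP into the modulus) and a two-adapter channel:

```shell
dest_ip=10.10.10.11   # destination IP from the example above
n_adapters=2

# default hash mode: take the last byte of the destination IP,
# divide by the number of adapters, and use the remainder as the
# outgoing adapter index (adapters are numbered from 0)
last_byte=${dest_ip##*.}
adapter=$(( last_byte % n_adapters ))
echo "adapter index: $adapter"   # 11 % 2 leaves remainder 1, so the second adapter
```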

round_robin: In this mode the EtherChannel will rotate through the adapters, giving each adapter one packet before repeating. The packets may be sent out in a slightly different order than they were given to the EtherChannel, but it will make the best use of its bandwidth. It is an invalid combination to select this mode with a Hash Mode other than default. If you choose the round-robin mode, leave the Hash Mode value as default.

netif_backup: This option is available only in AIX 5.1 and AIX 4.3.3. In this mode, the EtherChannel will activate only one adapter at a time. The intention is that the adapters are plugged into different Ethernet switches, each of which is capable of getting to any other machine on the subnet or network. When a problem is detected with the direct connection (or optionally through the inability to ping a machine), the EtherChannel will deactivate the current adapter and activate a backup adapter. This mode is the only one that makes use of the Internet Address to Ping, Number of Retries, and Retry Timeout fields.

Network Interface Backup Mode does not exist as an explicit mode in AIX 5.2 and later. To enable Network Interface Backup Mode in AIX 5.2 and later, you can configure multiple adapters in the primary EtherChannel and a backup adapter.

8023ad: This option enables the use of the IEEE 802.3ad Link Aggregation Control Protocol (LACP) for automatic link aggregation.

Hash Mode: Choose from the following hash modes, which will determine the data value that will be used by the algorithm to determine the outgoing adapter:
default: The destination IP address of the packet is used to determine the outgoing adapter. For non-IP traffic (such as ARP), the last byte of the destination MAC address is used to do the calculation. This mode guarantees packets are sent out over the EtherChannel in the order they were received, but it may not make full use of the bandwidth.

src_port: The source UDP or TCP port value of the packet is used to determine the outgoing adapter. If the packet is not UDP or TCP traffic, the last byte of the destination IP address will be used. If the packet is not IP traffic, the last byte of the destination MAC address will be used.

dst_port: The destination UDP or TCP port value of the packet is used to determine the outgoing adapter. If the packet is not UDP or TCP traffic, the last byte of the destination IP will be used. If the packet is not IP traffic, the last byte of the destination MAC address is used.

src_dst_port: The source and destination UDP or TCP port values of the packet are used to determine the outgoing adapter (specifically, the source and destination ports are added and then divided by two before being fed into the algorithm). If the packet is not UDP or TCP traffic, the last byte of the destination IP is used. If the packet is not IP traffic, the last byte of the destination MAC address is used. This mode can give good packet distribution in most situations, both for clients and servers.

Note: It is an invalid combination to select a Hash Mode other than default with a Mode of round_robin.


Backup Adapter: This field is optional. Enter the adapter that you want to use as your EtherChannel backup.

Internet Address to Ping: This field is optional and only takes effect if you are running Network Interface Backup mode or if you have one or more adapters in the EtherChannel and a backup adapter. The EtherChannel will ping the IP address or host name that you specify here. If the EtherChannel is unable to ping this address for the number of times specified in the Number of Retries field and in the intervals specified in the Retry Timeout field, the EtherChannel will switch adapters.

Number of Retries: Enter the number of ping response failures that are allowed before the EtherChannel switches adapters. The default is three. This field is optional and valid only if you have set an Internet Address to Ping.

Retry Timeout: Enter the number of seconds between the times when the EtherChannel will ping the Internet Address to Ping. The default is one second. This field is optional and valid only if you have set an Internet Address to Ping.

5) Press Enter after changing the desired fields to create the EtherChannel.

6) Configure IP over the newly created EtherChannel device by typing smitty chinet at the command line.

7) Select your new EtherChannel interface from the list.

8) Fill in all of the required fields and press Enter.

How to map a disk to an LPAR from the VIOS

Use the following command to present a disk from the VIOS to an LPAR:

mkvdev -vdev hdisk48 -vadapter vhost11 -dev vdlg732_datavg
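After the mapping, the new virtual target device can be confirmed on the VIOS; a sketch, guarded so it is a no-op outside a VIOS restricted shell (the adapter name repeats the example above):

```shell
vhost=vhost11   # server-side virtual SCSI adapter from the example

# lsmap shows the backing device / VTD mapping for this adapter;
# the command exists only in the VIOS restricted shell
if command -v lsmap >/dev/null 2>&1; then
    lsmap -vadapter "$vhost"
fi
```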


What is PowerVM in AIX


What is PowerVM ?

PowerVM is a licensed software/firmware feature that enables IBM virtualization technology on IBM POWER systems.
It is available on POWER5, POWER6 and POWER7 systems.
It allows AIX 5L V5.3 or later and Linux LPARs to run without physical adapters.

It is available in the following three editions:
1. IBM PowerVM Express Edition
2. IBM PowerVM Standard Edition
3. IBM PowerVM Enterprise Edition

Virtual I/O Server is available as part of the PowerVM Editions (formerly known as Advanced POWER Virtualization) feature.

Virtual I/O Server facilitates:

1. Sharing of physical resources between LPARs on the system.
2. Creating LPARs without requiring additional physical resources such as network adapters, HBAs, or SCSI adapters.
3. Creating more LPARs than there are I/O slots or physical devices.
4. Maximizing use of physical resources on the system.

N-Port Id Virtualization (NPIV) - PowerVM


N-Port Id Virtualization :

1. A POWER feature (actually an industry standard) for virtualizing a physical Fibre Channel port.
2. It allows multiple LPARs to share a physical Fibre Channel HBA.
3. Each logical HBA on an LPAR has its own WWPN addresses (given in pairs; the second WWPN is used for LPM), which can be used for SAN zoning.
4. Each physical HBA port can support up to 64 virtual ports.
5. Compatible with Live Partition Mobility.


NPIV Requirements :
- POWER6 or later
- FC 5735 PCIe 8 Gb FC adapter (it comes with 2 ports)
- VIOS 2.1 or later
- HMC 7.3.4 or later
- OS
-- AIX 5.3 TL09 SP 2
-- AIX 6.1 TL02 SP2
-- AIX 7.1 TLxx SPx
-- SLES 10 SP2
-- RHEL 4.7 or later

You also need an NPIV-capable SAN switch, so check with your storage team before procuring NPIV-capable servers.


How to create the virtual adapters for the VIOS and the LPARs?
You create the virtual FC adapters for the VIOS and the LPARs by logging on to the HMC, just as you do for VSCSI adapters.

How to configure SAN zoning?
SAN zoning should be based on the WWPNs from the client's virtual FC adapter and NOT the VIOS server adapters. You need to be very careful with this; otherwise you will not see the SAN LUNs from the partition.

Here are some commands that can be used on the VIO server.

To map a physical HBA port to a virtual FC adapter :
# vfcmap -vadapter vfchost0 -fcp fcs0

To unmap a physical HBA port from a virtual FC adapter :
# vfcmap -vadapter vfchost0 -fcp

To list the mapping between a specific virtual and physical FC adapters :
# lsmap -npiv -vadapter vfchost0

To list the mapping between all Virtual and Physical FC adapters :
# lsmap -all -npiv

To list the available NPIV capable ports :
# lsnports

To list the Virtual FC adapter details :
# lsdev -dev vfchost0

To list the NPIV physical FC adapter details :
# lsdev -dev fcs0

To monitor I/O traffic on a virtual FChost (server side virtual adapter) :
# viostat -adapter vfchost1

Here are some commands that can be used on HMC.

To list the virtual FC adapters on all the lpars on a managed system :
# lshwres --rsubtype fc -m managed-system --level lpar -r virtualio

To list the WWPN and to check whether its active or not on all the LPARs in a managed system :
# lsnportlogin -m managed-system --filter "profile-names=normal"

Here are some commands that can be used on LPAR level.

To view the WWPN of a virtual FC adapter :
# lscfg -vpl fcs0 | grep Net

To view the statistics on a virtual FC adapter (client) :
# fcstat fcs0

Sometimes you may need to set a specific WWPN on the virtual adapters on the client.
You can use the below commands (in HMC) during that scenario.

To list the Current Profile details:
hscroot@hmc1:~> lssyscfg -r prof -m sys709 --filter "lpar_ids=30,profile_names=Normal"
name=Normal,lpar_name=lpar01,lpar_id=30,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=1536,max_mem=2048,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:64,proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.2,max_proc_units=1.0,min_procs=1,desired_procs=1,max_procs=3,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=50,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1","virtual_scsi_adapters=20/client/2/sys506_vios2/4/1,10/client/1/sys506_vios1/4/1",virtual_eth_adapters=2/0/2//0/1/ETHERNET0//all/0,vtpm_adapters=none,"virtual_fc_adapters=""29/client/1/sys709_vios1/29/c506000000000009,c506000000000010/0"",""30/client/2/sys506_vios2/30/c506000000000011,c506000000000012/1""",hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lhea_logical_ports=none,lhea_capabilities=none,lpar_proc_compat_mode=default,electronic_err_reporting=null


To change the WWPN of the Virtual FC adapters (at slot numbers 29 and 30) on a LPAR Profile:
hscroot@hmc1:~> chsyscfg -r prof -m sys709 -i name=Normal, lpar_name=lpar01, \"virtual_fc_adapters=\"\"29/client/1/sys709_vios1/29/c506000000000009,c506000000000010/0\"\",\"\"30/client/2/sys709_vios2/30/c506000000000011,c506000000000012/1\"\"\"

Virtual SCSI - PowerVM


Commands for VIO Server : 

To view the current reserve policy of a disk :
# lsdev -dev hdisk2 -attr reserve_policy

To set the reserve policy of a disk :
# chdev -dev hdisk2 -attr reserve_policy=no_reserve -perm

To view the current values of all the attributes of a FC adapter :
# lsdev -dev fcs0 -attr

To modify the attributes of the FC protocol device :
# chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm

To map a disk to a Virtual Server SCSI adapter :
# mkvdev -vdev hdisk2 -vadapter vhost0 -dev lpar1_vtd

Note: "-dev lpar1_vtd" is an option to specify the VTD name.
If not specified, it would take a default name.

To unmap a disk from a Virtual Server SCSI adapter (technically removing a VTD device) :
# rmvdev -vtd lpar1_vtd

To list all the backing device and Virtual Server SCSI adapter mapping :
# lsmap -all

To list all the backing devices mapped to a Virtual Server SCSI adapter :
# lsmap -vadapter vhost0


Commands for AIX server :

To list the attributes of a virtual disk :
# lsattr -El hdisk0

To set the necessary attributes of a virtual disk :
# chdev -l hdisk0 -a hcheck_mode=nonactive hcheck_interval=20 algorithm=fail_over
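Applied to every disk on the client, the same chdev call can be looped; a hedged sketch (guarded so it does nothing on a non-AIX box), assuming all hdisks are MPIO virtual disks:

```shell
attrs="hcheck_mode=nonactive hcheck_interval=20 algorithm=fail_over"

# Apply the health-check and failover attributes to every disk;
# lsdev/chdev exist only on AIX, so guard the loop
if command -v lsdev >/dev/null 2>&1 && command -v chdev >/dev/null 2>&1; then
    for d in $(lsdev -Cc disk -F name); do
        chdev -l "$d" -a $attrs
    done
fi
```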

To list all the paths for the virtual disks :
# lspath

To list the parent device of a virtual disk :
# lsparent -CH -l hdisk1

To display more details for virtual disks :
# lspath -H -F "status name parent path_id connection"

Virtual Ethernet - PowerVM


Commands for the VIO Servers :

To list all the adapters :
# lsdev -type adapter

To list all the virtual adapters :
# lsdev -virtual

To list the configuration of an Ethernet adapter (including its MAC Address) :
# lscfg -l ent3

To list all the slots (Physical and Virtual) along with the devices :
# lsdev -slots

To create a SEA using the physical adapter (ent0), virtual adapter (ent2), PVID as 1 :
# mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1

To create a SEA using the physical adapter (ent0), virtual adapter (ent2), PVID as 1, control channel (ent4) :
# mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent4

To identify the speed and duplex on a physical adapter :
# lsdev -dev ent0 -attr | grep media_speed

To set the speed and duplex on a physical adapter :
# chdev -dev ent0 -attr media_speed=100_Full_Duplex

To remove the current tcpip/ip configuration :
# rmtcpip -all

To configure the IP address on a network interface :
# mktcpip -hostname vios1 -inetaddr 192.168.2.1 -interface ent3 -netmask 255.255.255.0 -gateway 192.168.2.1

To list the state of the network interfaces (numeric) :
# lstcpip -num -state

To list the routing table :
# lstcpip -routtable

To list the Physical/Virtual/Shared Ethernet Adapter mapping :
# lsmap -all -net

To identify the port VLAN ID of a virtual adapter :
# entstat -all ent2 | grep "Port VLAN ID"

To identify the virtual switch of a virtual adapter :
# entstat -all ent2 | grep "Switch ID"

To identify the Control Channel of a SEA :
# entstat -all ent5 | grep "Control Channel"

To identify the priority of a SEA :
# entstat -all ent5 | grep "Priority"

Commands for AIX Partition :

To identify the VLAN ID :
# entstat -d ent0 | grep ID

To identify the MAC Address :
# entstat -d ent0 | grep Address

How to update the Virtual I/O Server?

Follow the steps below to update your VIO server :

1. Shut down the VIO clients. This is not required in a dual-VIOS setup.

2. Apply the update using the below command (you have to use the proper syntax)

# updateios

3. Reboot the Virtual I/O server

# shutdown -restart

4. Once the server comes back online, login and check the OS level

# ioslevel

5. After a few weeks, you may have to commit the applied filesets

# updateios -commit


Here are the various ways of updating a Virtual I/O server

To update Virtual I/O server from a local directory :
# updateios -dev /tmp/viopack -install -accept


To update Virtual I/O servers from remote filesystem :
# mount NFS-server:/share-name /mnt
# updateios -dev /mnt -install -accept


To update Virtual I/O server from an optical drive :
# updateios -dev /dev/cd0 -install -accept


To commit all the uncommitted filesets and then to update Virtual I/O server from an optical drive :
# updateios -f -dev /dev/cd0 -install -accept


Now let us look at the various uses of updateios command.

To commit all the applied filesets :
# updateios -commit

To clean up after an interrupted installation :
# updateios -cleanup

To reject all the applied (uncommitted) filesets :
# updateios -reject

To remove a fileset from the Virtual I/O server :
# updateios -remove fileset-name

How to backup VIOS in AIX

BACKUP OF VG STRUCTURE :

You can back up the structure of any volume group; the configuration data is stored under /tmp/vgdata. This is done automatically when you run backupios (the configuration data is included in that backup).

Here are some commands.

To backup the structure of datavg :
# savevgstruct datavg

To display a list of saved volume groups
# restorevgstruct -ls

To restore the structure of datavg on 2 available disks :
# restorevgstruct -vg datavg hdisk2 hdisk3



BACKUP OF USER-DEFINED VIRTUAL DEVICES :

You can take a backup of all the user-defined virtual devices using the viosbr command. It requires a minimum VIOS level of 2.1.2.0. The backup file includes logical devices such as storage pools, SEAs, virtual server SCSI/FC/Ethernet adapters, and device attributes for disks, optical devices, tape devices, LHEAs, and Ethernet devices/interfaces.

Here are some commands

To take a backup of user-defined virtual devices in an XML file :
# viosbr -backup -file filename

To view the backup information :
# viosbr -view -file filename

To restore the virtual device configuration :
# viosbr -restore -file filename

To validate the backup file :
# viosbr -restore -validate -file filename
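In practice the backup file is usually timestamped so older configurations are kept; a sketch (the file-name scheme is an assumption, and viosbr itself needs VIOS 2.1.2.0 or later):

```shell
# Build a dated backup file name, e.g. vioscfg_20240131
stamp=$(date +%Y%m%d)
backup_file="vioscfg_${stamp}"

# viosbr exists only in the VIOS restricted shell, so guard it
if command -v viosbr >/dev/null 2>&1; then
    viosbr -backup -file "$backup_file"
fi
echo "$backup_file"
```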


BACKUP OF VIO SERVER :

To perform backup on a tape media :
# backupios -tape /dev/rmt0

To perform backup on a DVD media :
# backupios -cd /dev/cd0 -udf

To perform backup on a DVD-RAM disk :
# backupios -cd /dev/cd0 -udf -accept

To take a backup to a file (creates a file called nim_resources.tar) :
# backupios -file /mnt

To perform mksysb backup :
# backupios -file /mnt/vioserver1.mksysb -mksysb

Networking in VIO Server




To configure initial TCPIP setup:
# mktcpip -hostname vios1 -inetaddr 192.168.10.55 -interface en0 -netmask 255.255.255.0 -gateway 192.168.10.1 -nsrvdomain mydomain.com -start

To list stored tcpip configuration:
# lstcpip -stored

To list ethernet adapters on the server:
# lstcpip -adapters

To show system hostname:
# lstcpip -hostname

To show dns servers:
# lstcpip -namesrv

To display routing table:
# lstcpip -routtable

To display routing table in numbers:
# lstcpip -num -routtable

To list all open inet sockets:
# lstcpip -sockets -family inet

To show the state of all configured network interfaces:
# lstcpip -state

To flush (remove) all tcpip settings:
# rmtcpip -all

To unconfigure a network interface:
# rmtcpip -interface en0

To clean up routing table:
# rmtcpip -f -routing

To remove DNS information:
# rmtcpip -namesrv

To unconfigure tcpip information on en0 during next reboot:
# rmtcpip -f -interface en0 -nextboot

To add an entry to /etc/hosts:
# hostmap -addr 192.168.10.34 -host alpha

To list the contents of hosts file:
# hostmap -ls

To remove a specific entry on hosts files:
# hostmap -rm 192.168.10.34


To enable / start all network services:
# startnetsvc ALL

To disable / stop all network services:
# stopnetsvc ALL

To enable telnet on a VIO Server:
# startnetsvc telnet

To enable ftp on a VIO Server:
# startnetsvc ftp

To enable ldap daemon:
# startnetsvc ldap

To enable xntpd:
# startnetsvc xntpd

To enable cimserver:
# startnetsvc cimserver

To send CLI tracing info to system log:
# startnetsvc tracelog

To send system error log to the system log:
# startnetsvc errorlog

To list the state of the ftp daemon:
# lsnetsvc ftp

To add a domain name entry:
# cfgnamesrv -add -dname abc.aus.century.com

To add a name server entry:
# cfgnamesrv -add -ipaddr 192.9.201.1

To list all resolv.conf entries:
# cfgnamesrv -ls

To display statistics on a network interface:
# entstat en0

To reset statistics on a network interface:
# entstat -reset ent0

To trace a route:
# traceroute nis.nsf.net

History of AIX

Versions : 

AIX 5L 5.3, August 2004
NFS Version 4 support
Advanced Accounting
Virtual SCSI
Virtual Ethernet
Simultaneous multithreading (SMT) support
Micro-Partitioning support
POWER5 support
JFS2 Quota support
JFS2 Filesystem shrink support


AIX 5L 5.2, October 2002
Minimum level required for POWER5 hardware
Support for MPIO Fibre Channel disks
iSCSI Initiator software
Dynamic LPAR support


AIX 5L 5.1, May 2001
Minimum level required for POWER4 hardware and the last release that supported Micro Channel architecture
Introduction of 64-bit kernel, installed but not activated by default
Introduction of JFS2
Static LPAR support
The L stands for Linux affinity
Trusted Computing Base (TCB)


AIX 4.3.3, September 1999
Added online backup function
Workload Management (WLM)


AIX 4.3.2, October 1998

AIX 4.3.1, April 1998

AIX 4.3, October 1997
Support for 64-bit architecture
Support for IPv6

AIX 4.2.1, April 1997
NFS Version 3 support


AIX 4.2, May 1996

AIX 4.1.5, August 1996

AIX 4.1.4, October 1995

AIX 4.1.3, July 1995
CDE 1.0 became the default GUI environment, replacing Motif X Window Manager.

AIX 4.1.1, October 1994

AIX 4.1, August 1994

AIX v4, 1994

AIX v3.2 1992

AIX v3.1
Introduction of Journaled File System (JFS)

AIX v3, February 1990
Developer release licensed only to OSF; the LVM was incorporated into OSF/1.
SMIT was introduced.

AIX v2.0
Last version was 2.2.1.

AIX v1, 1986
Last version was 1.3.

Bootlist in AIX

Boot List - Overview

BOOT LIST/MODE:

There are 2 types of boot.
a) Normal mode
b) Service mode.

They are explained below

Normal Boot:

A normal boot corresponds to runlevel 2. This is the type of boot used while the system is in its running/production state.

To view the bootlist for normal mode,
# bootlist -m normal -o
hdisk0
hdisk1

To set the bootlist for a normal mode boot, use the following command,

bootlist -m normal hdisk0 hdisk1 rmt0 cd0

You can also change this in SMS menu.

Service Boot:
The service boot list is used when booting the system for maintenance tasks. No applications or network services will be started.

To set the bootlist for service mode,
# bootlist -m service fd0 cd0 rmt0 hdisk2 ent0

Another feature introduced with AIX Version 4.2 is the use of generic device names. Instead of naming a specific disk, such as hdisk0 or hdisk1, you can use the generic definition of SCSI disks.

For example: # bootlist -m service cd rmt scdisk

This will cause the system to probe any CD, then probe any tape drive, and finally, probe any SCSI disk for a BLV. The actual probing of the disk is a check of sector 0 for a boot record, which, in turn, will point out the boot image.


You can also change bootlist using diag.


At the Diag Main Menu, select Task Selection, then choose Display or Change Bootlist. Finally, you have to choose whether to change the Normal mode bootlist or the Service mode bootlist.

AIX Commands – Part I

Volume Group Commands



Display all VGs:

# lsvg



Display all active VGs:

# lsvg -o



Display info about rootvg,

# lsvg rootvg



Display info about all LVs in all VGs,

# lsvg -o | lsvg -il



Display info about all PVs in rootvg

# lsvg -p rootvg



Create VG with name vgxx on hdisk1 with partition size 8MB,

# mkvg -s 8 hdisk1



Create VG with name sivg on hdisk1 with partition size 8MB,

# mkvg -s 8 -y sivg hdisk1



Create sivg on hdisk1 with PP size 4 and number of partitions 2 * 1016,

# mkvg -s 4 -t 2 -y sivg hdisk1



To make VG newvg automatically activated at startup,

# chvg -a y newvg



To deactivate the automatic activation at startup,

# chvg -a n newvg



To change maximum no. of PP to 2032 on vg newvg,

# chvg -t 2 newvg



To disable quorum on VG newvg,

# chvg -Q n newvg



Reorganises PP allocation of VG newvg,

# reorgvg newvg



Add PV hdisk3 and hdisk4 to VG newvg,

# extendvg newvg hdisk3 hdisk4



Exports the VG newvg,

# exportvg newvg



Import the hdisk2 with name newvg, and assign major number 44,

# importvg -V 44 -y newvg hdisk2



Remove PV hdisk3 from VG newvg,

# reducevg newvg hdisk3



To deactivate VG newvg,

# varyoffvg newvg



To activate VG newvg,

# varyonvg newvg



To sync the mirrored LV in the VG sivg,

# syncvg -v sivg



To mirror LVs of sivg with hdisk2 (-m for exact mirror, -S for background mirror),

# mirrorvg -S -m sivg hdisk2



To remove the mirrored PV from the set,

# unmirrorvg sivg hdisk2



To synchronize the ODM with the LVM (VGDA) for datavg,

# synclvodm datavg

NIM Network Installation Manager in AIX


Required Filesets:

For Server - bos.sysmgt.nim.master and bos.sysmgt.nim.spot
For Client - bos.sysmgt.nim.client

Few Resource Definitions:

SPOT - Shared Product Object Tree is a directory containing files required to boot a machine and the boot image

LPP_SOURCE - Licensed Program Product source is a directory containing images/filesets that AIX uses to load software

MKSYSB - Mksysb resource used to build a machine

Requirements for NIM Server:

Disk Space :
1. 3 GB per base lpp_source resource
2. 500 MB + per mksysb resource
3. 500 MB per SPOT resource
4. Additional buffer space for future growth

Other Requirements:
# Minimum 512 MB real memory
# 10 or 100 Mbps Ethernet adapter


My Recommendations for NIM VG and Filesystems :


1. Create a separate VG called 'nimvg' with enough space.


2. Create the following filesystems in nimvg based upon your requirement


 a. /tftpboot - To hold boot images
 b. /export/nim - To hold the resources like SPOT, LPP, Mksysb

Directory Structure :
/export/nim/lpp_source - To hold lpp_source resources
/export/nim/spot - To hold SPOT resources
/export/nim/mksysb - To hold the mksysb backups for clients

Naming Schemes:

Follow the schemes below so resources are easy to identify during regular operations :

spot530TL6 - SPOT for AIX V 5.3 TL 6
spot530TL9 - SPOT for AIX V 5.3 TL 9
lpp_source530TL6 - LPP_SOURCE for AIX V 5.3 TL 6
lpp_source530TL9 - LPP_SOURCE for AIX V 5.3 TL 9
client_server1 - Mksysb image of the host server1
client_server2 - Mksysb image of the host server2


How to setup the NIM Master :

0. Create the /tftpboot and /export/nim file systems as per your requirement

1. Initial setup of NIM Master
  a. ODM database
  b. Boot Area: /tftpboot directory that is used to store boot files (images)
  c. /etc/niminfo         - The key configuration file, present on both master and clients
  d. nimesis daemon - The daemon used to communicate with the NIM clients

2. Insert the AIX CD into the master server's CD Drive

3. Create LPP_SOURCE and SPOT resources


Commands to manage NIM master and clients:


To setup NIM Server:
# nim_master_setup -B -a device=/dev/cd0 -a file_system=/nim -a volume_group=nimvg


To setup NIM installation in a client:
# smitty nim_bosinst


To view the status of NIM installation in a NIM client:
# lsnim -l client_hostname


To define a lpp_source resource:
# nim -o define -t lpp_source -a source=/dev/cd0 -a server=master -a location=/nim/lpp_source/AIX_5_3_4 AIX_5_3_4

To define a spot resource:
# nim -o define -t spot -a server=master -a location=/export/nim/spot -a source=lpp_source530 spot530
To remove a resource:
# nim -o remove AIX_5_3_4


To initialize a NIM client for diag operation:
# nim -o diag client_hostname


To initialize a NIM client for maintenance operation:
# nim -o maint client_hostname


To unconfigure a NIM server:
# nim -o unconfig master_server


To allocate a SPOT to a NIM client:
# nim -o allocate -a spot=AIX_5_3 client_hostname


To deallocate a SPOT from a NIM client:
# nim -o deallocate -a spot=AIX_5_3 client_hostname


To remove a NIM client after deallocating all its resources:
# nim -o remove client_hostname


To reboot a client:
# nim -o reboot client_hostname


To list all the NIM resources:
# lsnim


To list detailed information about a nim client:
# lsnim -l client_hostname


To list the resources allocated to a NIM client:
# lsnim -c resources client_hostname

Performance Monitoring and Tuning in AIX

Performance Monitoring :

1. How to find out the system-wide memory usage ?
# svmon -G -i 2 5

2. How to list top 10 memory consuming processes ?

You can use any of the below commands
# svmon -Put 10
# ps aux | head -1 ; ps aux | sort -rn +3 | head

3. How to list top 10 cpu consuming processes ?

# ps aux | head -1 ; ps aux | sort -rn +2 | head -10
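The `+2`/`+3` forms above are the historic sort column syntax; on newer sort implementations the equivalent keys are `-k3` (%CPU) and `-k4` (%MEM). The sorting idea, shown on synthetic ps-style rows so it runs anywhere (the field layout is an assumption for illustration):

```shell
#!/bin/sh
# Sort ps-style rows by the %CPU column (field 3), highest first, and
# keep the top entry - the same idea as: ps aux | sort -rn +2 | head
printf '%s\n' \
  'root 101 0.5 1.2 sshd' \
  'ora  202 9.8 4.0 oracle' \
  'app  303 3.1 2.5 java' |
sort -rn -k3 | head -1
```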

4. What is the best command for general performance monitoring :

# topas

You can even use 'jtopas', a Java-based system monitoring tool.

5. How to start trace for the entire system ?

# trace -a

6. How to stop trace ?

# trcstop

7. Where is the log file for trace tool located ?

/var/adm/ras/trcfile

8. What is the command used to generate trace report from a trace log file ?

# trcrpt

9. How to generate report on utilization statistics related to an LPAR ?

# lparstat

10. How to display the LPAR configuration report ?

# lparstat -i

11. What are the mostly used commands to find the cpu, memory,disk i/o statistics ?

# sar -> CPU, Memory statistics
# vmstat -> CPU, Memory statistics
# iostat -> CPU, Disk I/O statistics
# topas -> CPU, Memory, Network and Disk I/O statistics
# ps aux -> CPU, Memory statistics

12. How to display processes related to a specific user ?

# ps -fu username

13. How to list all the 64bit processes running in a system ?

# ps -efM

14. How to enable Interface Specific Network Options in AIX ?
# no -o use_isno=1

By enabling use_isno option, you can set buffer settings on a specific interface, giving you better control over performance management of network interfaces.

15. What is 'thewall' and how to set ?
'thewall' in AIX defines the upper limit for network kernel buffers.

On AIX 5L V5.3 with a 32-bit kernel, thewall is 1 GB or half the size of real memory, whichever is smaller. On AIX 5L V5.3 with a 64-bit kernel, thewall is 65 GB or half the size of real memory, whichever is smaller.
To display the size of the thewall,

# no -o thewall

Note:
From AIX 5L Version 5.1 onwards, the size of thewall is static and cannot be changed. To reduce the upper limit of memory used for networking, use the maxmbuf tunable.

16. What is the maxmbuf tunable and how to set it ?

The maxmbuf tunable used by AIX specifies the maximum amount of memory that can be used by the networking subsystem.

It can be displayed using the command below,
# lsattr -El sys0 -a maxmbuf

By default the maxmbuf tunable is disabled (set to 0), which means the value of thewall defines the maximum amount of memory used for network communications. Setting maxmbuf to a non-zero value overrides thewall; this is the only way of reducing the limit set by thewall.

The value of maxmbuf is defined in 1 KB units. To set its value to roughly 1 GB,
# chdev -l sys0 -a maxmbuf=1000000
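Since maxmbuf is expressed in 1 KB units, a gigabyte target converts as GB * 1024 * 1024; the chdev example above uses the round decimal figure 1000000, which is roughly 1 GB. The conversion, as a plain-shell sketch (illustrative only):

```shell
#!/bin/sh
# Convert a size in GB to the 1 KB units that maxmbuf expects.
gb_to_maxmbuf_units() {
    echo $(( $1 * 1024 * 1024 ))
}

gb_to_maxmbuf_units 1   # 1048576 KB units; the text rounds this to 1000000
```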

17. How to find out the media speed of a network interface ?

# netstat -v ent0 | grep Media

18. How to view the statistics for a specific network adapter ?

# entstat -d ent1

You can also use "netstat -v ent1".

19. How to reset the above network statistics ?

# entstat -r ent1

20. How to start iptrace on a specific network adapter ?

To Start :
# startsrc -s iptrace -a "-i en0 iptrc.out" &

To Stop:
# stopsrc -s iptrace

21. How to generate report from the iptrace's output file ?

# ipreport -r -s iptrc.out > ipreport

22. How to get the NFS statistics ?

NFS server RPC statistics : # nfsstat -sr
NFS server NFS statistics : # nfsstat -sn
NFS client RPC statistics : # nfsstat -cr
NFS client NFS statistics : # nfsstat -cn
Statistics on mounted file systems : # nfsstat -m
To reset the nfsstat statistics : # nfsstat -z

23. How to list the current values of all the network tunables?

# no -a

24. How to display the current value of a specific network tunable?

# no -o tcp_recvspace

25. How to display all the values (current, default, boot, min, max..) values of a network tunable ?

# no -L tcp_recvspace

26. What is the file that holds the next boot tunables's values ?

/etc/tunables/nextboot

27. What is the file that automatically generated with all the values of the network tunables that were set immediately after the reboot ?

/etc/tunables/lastboot

28. How to change the current value of a network tunable's value as well as add the entry to the /etc/tunables/nextboot file ?

Use the 'p' flag in the no command.

For example: # no -p -o tcp_recvspace=16k

29. How to display all the NFS network variables ?

# nfso -a

30. How to enable the collection of disk input/output statistics ?

# chdev -l sys0 -a iostat=true

31. How to display the 5 busiest logical volumes in a VG ?

# lvmstat -v datavg -c 5

32. How to display, enable and disable the statistics collection for a VG ?

To enable: # lvmstat -v datavg -e
To disable: # lvmstat -v datavg -d
To show : # lvmstat -v datavg

33. How to display the statistics for a LV ?

# lvmstat -l lv001

34. How to report disk statistics ?

# sar -d 5 60
The above command displays disk I/O statistics 60 times at a 5-second interval.
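As with other sar invocations, the two trailing arguments are the interval (seconds) and the sample count, so the total monitoring window is interval * count. A trivial helper (plain shell, illustrative only):

```shell
#!/bin/sh
# Total sampling window for "sar -d <interval> <count>", in seconds.
sar_window() {
    echo $(( $1 * $2 ))
}

sar_window 5 60   # sar -d 5 60 samples for 300 seconds (5 minutes)
```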

35. How to list top 10 real memory consuming processes ?

# svmon -Put 10

36. How to list top 10 paging space consuming processes ?

# svmon -Pgt 10

37. How to list the files opened by a process ?

# procfiles Process_id

38. How to find out the memory usage of a specific process ?

# svmon -P Process_id

39. How to display the paging (swap) usage ?

# swap -s
allocated = 4718592 blocks  used = 1475527 blocks  free = 3243065 blocks

# swap -l
device maj,min total free
/dev/paging02 38, 4 4608MB 3166MB
/dev/paging01 38, 3 4608MB 3168MB
/dev/paging00 10, 14 4608MB 3167MB
/dev/hd6 10, 2 4608MB 3167MB

How to backup and Restore in AIX


Few Points:
a. A rootvg backup can be taken through the mksysb command. Most people refer to it as a mksysb backup.
b. This type of backup on a tape is bootable. Hence it is widely used to restore the system in case of a system crash.
c. A mksysb backup contains 4 images
i. BOS boot image - kernel and device drivers
ii. mkinsttape image - ./image.data, ./tapeblksz, ./bosinst.data and a few other commands
iii. dummy .toc - nothing but a dummy table of contents file
iv. rootvg data - this is where the actual data resides

Files used by mksysb:

/image.data :
Contains information about the image installed during the BOS installation process. This includes the sizes, names, and mount points of the LVs and file systems in rootvg [actually nothing but the rootvg structure]. It can be created using the mkszfile command.

/var/adm/ras/bosinst.data :
It allows you to customize the OS installation. It is modified mostly when using a mksysb image to install new servers.

Few Commands :

To generate just /image.data :
# mkszfile

To create /image.data and generate a system backup on the tape :
# mksysb -i /dev/rmt0

To generate a system backup on the tape but to exclude /home directory and to create /image.data :
# echo /home > /etc/exclude.rootvg
# mksysb -ei /dev/rmt0

To list the contents of a mksysb image :
# lsmksysb -f /backup/system1.mksysb

To restore a specific file from a mksysb image :
a. Rewind the tape :
# tctl -f /dev/rmt0 rewind
b. Move the tape forward to the end of 3rd image :
# tctl -f /dev/rmt0.1 fsf 3
c. Restore the specific file:
# restore -xqvf /dev/rmt0.1 /home/user1/file1

Non-rootvg Backup :

Few Points:
a. Volume groups other than rootvg can be backed up using the savevg command.
b. You can exclude certain files by creating /etc/exclude.vgname.
c. VG data files are kept under /tmp/vgdata/vg-name/vg-name.data.

Few Commands :

To backup a datavg to the tape drive :
# savevg -if /dev/rmt0 datavg

To backup a datavg to the tape drive and exclude certain files :
# savevg -ief /dev/rmt0 datavg

To restore the datavg image from the tape onto the disks specified in /tmp/vgdata/datavg/datavg.data file :
# restvg -f /dev/rmt0

To create the data file (/tmp/vgdata/oravg/oravg.data) for oravg :
# mkvgdata oravg

File System Backup :
File systems can be backed up in many ways. A few commands used for this are backup, cpio, and dsm [TSM client].

To back up all the files and subdirectories in the /home directory using full path names :
# find /home -print | backup -if /dev/rmt0

To back up all the files and subdirectories in the /home directory using relative path names :
# cd /home
# find . -print | backup -if /dev/rmt0


To backup a list of files:
# cat bakfile
/home/raja/file1.txt
/home/raja/file2.txt
/home/raja/file3.txt

# backup -iqvf /dev/rmt0 < bakfile

I-node Based Backup:


Here is the syntax for the backup command for an i-node based backup:

Syntax:
# backup [-u] [-level] [-f device] filesystem

-u -> Updates the /etc/dumpdates file
-level -> A value from 0 to 9, where 0 is a full backup and 1-9 back up the changes since the previous level

To back up the /home file system by i-node :
# backup -0 -uf /dev/rmt0 /home

i-node based backup has the advantage of allowing incremental and differential backups. The numeric flags (0 to 9) control what gets backed up. With the 'u' flag, the date/time of the last backup at each level is recorded in /etc/dumpdates.

Here are the different numeric flags used and their meanings,

0 - Full backup
1 - Backs up the files created/modified since the level-0 backup
2 - Backs up the files created/modified since the level-1 backup
3 - Backs up the files created/modified since the level-2 backup
4 - Backs up the files created/modified since the level-3 backup
5 - Backs up the files created/modified since the level-4 backup
6 - Backs up the files created/modified since the level-5 backup
7 - Backs up the files created/modified since the level-6 backup
8 - Backs up the files created/modified since the level-7 backup
9 - Backs up the files created/modified since the level-8 backup

Few Examples for i-node based backup :


Scenario 1 - Full and Incremental Backup :


If you want a full backup of /home on Sunday night and incremental backups on the other nights, follow the procedure below.

Sunday Night - Full Backup :
# backup -0 -uf /dev/rmt0 /home

Monday Night - Incremental Backup :
# backup -1 -uf /dev/rmt0 /home

Tuesday Night - Incremental Backup :
# backup -2 -uf /dev/rmt0 /home

Wednesday Night - Incremental Backup :
# backup -3 -uf /dev/rmt0 /home

Thursday Night - Incremental Backup :
# backup -4 -uf /dev/rmt0 /home

Friday Night - Incremental Backup :
# backup -5 -uf /dev/rmt0 /home

Saturday Night - Incremental Backup :
# backup -6 -uf /dev/rmt0 /home
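The weekly rotation above lends itself to a small cron wrapper that derives the backup level from the day of the week. A sketch in plain shell; the `date +%u` format and the tape device are assumptions, and the mapping sends Sunday to level 0 (full) and Monday-Saturday to levels 1-6:

```shell
#!/bin/sh
# Incremental rotation from the schedule above:
# Sunday -> level 0 (full), Monday-Saturday -> levels 1-6.
# date +%u prints 1 (Monday) .. 7 (Sunday); modulo 7 maps Sunday to 0.

weekday_to_level() {
    echo $(( $1 % 7 ))
}

level=$(weekday_to_level "$(date +%u)")
echo "would run: backup -${level} -uf /dev/rmt0 /home"
```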

Advantages of Incremental Backup:
a. It takes less time to restore specific data. For example, if you lose a file that was created on Thursday morning, you need only the Wednesday night tape to restore it.
b. This method consumes fewer tapes for backup, so it is cost effective compared to differential backup.

Disadvantages of Incremental Backup:
a. You need a larger number of tapes (the Sunday full backup plus every incremental since) to restore the entire file system.

Scenario 2 - Full and Differential Backup :


If you want a full backup of /home on Sunday night and differential backups on the other nights, follow the procedure below.

Sunday Night - Full Backup :
# backup -0 -uf /dev/rmt0 /home

Monday Night - Differential Backup :
# backup -1 -uf /dev/rmt0 /home

Tuesday Night - Differential Backup :
# backup -1 -uf /dev/rmt0 /home

Wednesday Night - Differential Backup :
# backup -1 -uf /dev/rmt0 /home

Thursday Night - Differential Backup :
# backup -1 -uf /dev/rmt0 /home

Friday Night - Differential Backup :
# backup -1 -uf /dev/rmt0 /home

Saturday Night - Differential Backup :
# backup -1 -uf /dev/rmt0 /home

Advantages of Differential Backup:
a. It takes fewer tapes (the Sunday full backup plus last night's differential) to restore the entire file system, so it is easier for the backup operator to restore the data.

Disadvantages of Differential Backup:
a. It consumes more tapes for backup (since the same files are backed up again and again for the whole week), so the cost is higher with this type of backup.

To list the contents of backup on the tape :
# restore -Tvf /dev/rmt0

To restore individual files from backup created by 'backup -i' command :
# restore -xvf /dev/rmt0 /home/user1/file1

To restore the entire file system :
# restore -rvf /dev/rmt0

Other Unix Backup Commands:

TAR:
tar is one of the few commands in UNIX that does not require a dash (-) in front of a flag.

To create a tar image in /tmp for a directory :
# tar cvf /tmp/oradata.tar /opt/oradata

To view the contents of a tar image :
# tar tvf /tmp/oradata.tar

To restore the tar image :
# tar xvf /tmp/oradata.tar

CPIO :
cpio reads and writes from stdin and stdout.

To backup the current directory to a /tmp/file.cpio file :
# find . -print | cpio -ov > /tmp/file.cpio
To view the table of contents of the cpio archived tape :
# cpio -itv < /dev/rmt0


To restore data from the cpio archive file :
# cpio -idv < /tmp/file.cpio

To restore a selective file from cpio archived tape :
# cpio -imv /home/roger/.profile < /dev/rmt0

To restore selectively only the *.c and *.cpp file :
# cpio -i "*.c" "*.cpp" < /dev/rmt0


DD Command :
The 'dd' command copies (and optionally converts) from an input device to an output device. This command does not span multiple tapes.
To copy a file and convert all the characters to upper case :
# dd if=/tmp/unixfile.txt of=/tmp/dosfile.txt conv=ucase

tcopy Command :
Copies from one tape device to another.

To list the contents of a tape media :
# tcopy /dev/rmt0

To copy all the data from one tape device to another :
# tcopy /dev/rmt0 /dev/rmt1

tctl Command :
tctl is widely used to control tape drives.

To rewind a tape device :
# tctl -f /dev/rmt0 rewind

To fast forward to the beginning of the 2nd tape mark :
# tctl -f /dev/rmt0.1 fsf

To do a retension :
# tctl -f /dev/rmt0 retension
Retension is nothing but moving the tape to the beginning, to the end, and back to the beginning. Do this if you encounter multiple read errors during a restore operation.

To display the status of a tape device :
# tctl -f /dev/rmt0 status

To eject a tape :
# tctl -f /dev/rmt0 offline

TSM Client :

Few Points:
a. It requires a connection to a TSM server and also registration on the TSM server.
b. You can take 'backup' and 'archive' copies based on the TSM server configuration.
c. An archive can be kept for 90 days, 180 days, ... based on the management class.
d. A backup can have different versions. The last backup is the new, active version and older backups are inactive versions of each file that is backed up.
e. Most small-sized companies keep 3 versions of backup, which means each file can have 3 backup versions.

To Backup a file :
# dsmc selective /tmp/file1

To archive a file :
# dsmc archive /tmp/file1

To list all the backed up filesystems :
# dsmc query filespace

To verify the backup of a file :
# dsmc query backup /tmp/file1

To verify the inactive version of a backup of a file :
# dsmc query backup -inactive /tmp/file1

To verify the archive of a file :
# dsmc query archive /tmp/file1


To backup a VIO Server :

To take the OS backup to a CD-ROM :
# backupios -cd /dev/cd1 -cdformat

To take the OS backup to a DVD-RAM :
# backupios -cd /dev/cd1 -udf

To take the OS backup to a tape drive :
# backupios -tape /dev/rmt0

To verify the backup available on a tape :
# backupios -tape /dev/rmt0 -verify

To generate a VIO backup (tar file) in a file :
# backupios -file /opt/file1

To generate a VIO backup (mksysb image) in a file :
# backupios -file /opt/file1 -mksysb

Note: To restore a backup image on a VIO server, use the "installios" command on the HMC. installios is a menu-driven tool that asks for the machine name, VIO server (LPAR) name, and profile name to restore the mksysb image.

How to create filesystem in AIX

# mklv -y oracle_lv oraclevg 10G
# crfs -v jfs2 -d oracle_lv -m /ora -A yes

or


# crfs -v jfs2 -A yes -g oraclevg -m /ora -a size=25G

How to rename devices in AIX

Starting with AIX 7.1, you can now easily rename devices. A new command called rendev was introduced to allow AIX administrators to rename devices as required.

From the man page:

The rendev command enables devices to be renamed. The device to be renamed, is specified with the -l flag, and the new desired name is specified with the -n flag.

The new desired name must not exceed 15 characters in length. If the name has already been used or is present in the /dev directory, the operation fails. If the name formed by appending the new name after the character r is already used as a device name, or appears in the /dev directory, the operation also fails.
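The length restriction above can be checked up front before calling rendev. A minimal pre-flight sketch in plain shell; the 15-character limit and the /dev and /dev/r checks mirror the man page text, while the function name and messages are my own:

```shell
#!/bin/sh
# Pre-flight check for a proposed rendev target name:
# must be 15 characters or fewer, and neither /dev/<name>
# nor /dev/r<name> may already exist.
valid_devname() {
    name=$1
    if [ ${#name} -gt 15 ]; then
        echo "invalid: longer than 15 characters"
        return 1
    fi
    if [ -e "/dev/$name" ] || [ -e "/dev/r$name" ]; then
        echo "invalid: already present in /dev"
        return 1
    fi
    echo "ok"
}

valid_devname averyverylongdiskname || true   # rejected: 21 characters
valid_devname hdisk300                         # ok on a system without hdisk300
```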

 If the device is in the Available state, the rendev command must unconfigure the device before renaming it. This is similar to the operation performed by the rmdev -l Name command. If the unconfigure operation fails, the renaming will also fail. If the unconfigure succeeds, the rendev command will configure the device, after renaming it, to restore it to the Available state. The -u flag may be used to prevent the device from being configured again after it is renamed.

Some devices may have special requirements on their names in order for other devices or applications to use them. Using the rendev command to rename such a device may result in the device being unusable. Note: To protect the configuration database, the rendev command cannot be interrupted once it has started. Trying to stop this command before completion could result in a corrupted database.

Here are some examples of using the rendev command on an AIX 7.1 system. In the first example I will rename hdisk3 to hdisk300. Note: hdisk3 is not in use (busy).
If the disk had been allocated to a volume group, I would have needed to vary off the volume group first.


# lspv
hdisk0          00f61ab2f73e46e2                    rootvg          active
hdisk1          00f61ab20bf28ac6                    None
hdisk2          00f61ab2202f7c0b                    None
hdisk4          00f61ab20b97190d                    None
hdisk3          00f61ab2202f93ab                    None

# rendev -l hdisk3 -n hdisk300

# lspv
hdisk0          00f61ab2f73e46e2                    rootvg          active
hdisk1          00f61ab20bf28ac6                    None
hdisk2          00f61ab2202f7c0b                    None
hdisk4          00f61ab20b97190d                    None
hdisk300        00f61ab2202f93ab                    None

Next, I’ll rename a virtual SCSI adapter. I renamed vscsi0 to vscsi2. Note: I placed the adapter, vscsi0, in a Defined state before renaming the device.

# rmdev -Rl vscsi0

# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi0 Defined    Virtual SCSI Client Adapter
vscsi1 Available  Virtual SCSI Client Adapter

# rendev -l vscsi0 -n vscsi2

# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi1 Available  Virtual SCSI Client Adapter
vscsi2 Defined    Virtual SCSI Client Adapter

Now I’ll rename a network adapter from ent0 to ent10. I bring down the interface before changing the device name.

# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi1 Available  Virtual SCSI Client Adapter
vscsi2 Defined    Virtual SCSI Client Adapter

# ifconfig en0
en0: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.1.20.19 netmask 0xffff0000 broadcast 10.153.255.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

# ifconfig en0 down detach

# rendev -l ent0 -n ent10

# lsdev -Cc adapter
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
ent10  Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi1 Available  Virtual SCSI Client Adapter
vscsi2 Defined    Virtual SCSI Client Adapter

# rendev -l en0 -n en10

# chdev -l en10 -a state=up
en10 changed

# mkdev -l inet0
inet0 Available

# ifconfig en10
en10: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.1.20.19 netmask 0xffff0000 broadcast 10.153.255.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1


If you want to be creative you can rename devices to anything you like (as long as it’s not more than 15 characters). For example, I’ll rename vscsi2 to myvscsiadapter.


# rendev -l vscsi2 -n myvscsiadapter
# lsdev -Cc adapter
ent1           Available  Virtual I/O Ethernet Adapter (l-lan)
myadapter      Available  Virtual I/O Ethernet Adapter (l-lan)
myvscsiadapter Defined    Virtual SCSI Client Adapter
vsa0           Available  LPAR Virtual Serial Adapter
vscsi1         Available  Virtual SCSI Client Adapter

And in the last example I’ll demonstrate changing virtual SCSI adapter device names on a live system.

This is single disk system (hdisk0), with two vscsi adapters.

# lspv
hdisk0          00f6048868b4deee                    rootvg          active

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1

# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi0 Available  Virtual SCSI Client Adapter
vscsi1 Available  Virtual SCSI Client Adapter

We ensure the adapter is in a Defined state before renaming it. This will fail otherwise.

# rmdev -Rl vscsi1
vscsi1 Defined
# lsdev -Cc adapter | grep vscsi
vscsi0 Available  Virtual SCSI Client Adapter
vscsi1 Defined    Virtual SCSI Client Adapter

Now we rename the adapter vscsi1 to vscsi3.

# rendev -l vscsi1 -n vscsi3

# lsdev -Cc adapter | grep vscsi
vscsi0 Available  Virtual SCSI Client Adapter
vscsi3 Defined    Virtual SCSI Client Adapter

That was easy enough. Now I need to bring the adapter and path online with cfgmgr; afterwards, the lspath output displays an additional path via vscsi3.

# lspath
Enabled hdisk0 vscsi0
Defined hdisk0 vscsi1

# cfgmgr
Method error (/etc/methods/cfgscsidisk -l hdisk0 ):
        0514-082 The requested function could only be performed for some
                 of the specified paths.

# lspath
Enabled hdisk0 vscsi0
Defined hdisk0 vscsi1
Enabled hdisk0 vscsi3

Now I need to remove the old path to vscsi1. The path to vscsi3 is now Enabled. The adapter, vscsi3, is in an Available state. All is good.

# rmpath -l hdisk0 -p vscsi1 -d
path Deleted

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi3

# lsdev -Cc adapter | grep vscsi
vscsi0 Available  Virtual SCSI Client Adapter
vscsi3 Available  Virtual SCSI Client Adapter

The same steps need to be repeated for the vscsi0 adapter. This is renamed to vscsi2.

# rmdev -Rl vscsi0
vscsi0 Defined
# lsdev -Cc adapter | grep vscsi
vscsi0 Defined    Virtual SCSI Client Adapter
vscsi3 Available  Virtual SCSI Client Adapter


# rendev -l vscsi0 -n vscsi2

# lsdev -Cc adapter | grep vscsi
vscsi2 Defined    Virtual SCSI Client Adapter
vscsi3 Available  Virtual SCSI Client Adapter


# lspath
Defined hdisk0 vscsi0
Enabled hdisk0 vscsi3

# cfgmgr
Method error (/etc/methods/cfgscsidisk -l hdisk0 ):
        0514-082 The requested function could only be performed for some
                 of the specified paths.

# lspath
Defined hdisk0 vscsi0
Enabled hdisk0 vscsi2
Enabled hdisk0 vscsi3

# rmpath -l hdisk0 -p vscsi0 -d
path Deleted

# cfgmgr
# lspath
Enabled hdisk0 vscsi2
Enabled hdisk0 vscsi3

That’s it. Both adapters have been renamed while the system was in use. No downtime required.

# lsdev -Cc adapter | grep vscsi
vscsi2 Available  Virtual SCSI Client Adapter
vscsi3 Available  Virtual SCSI Client Adapter

# lspath
Enabled hdisk0 vscsi2
Enabled hdisk0 vscsi3


How to create logical volume in AIX


Create a logical volume on one node:

mklv -y logical_volume_name volume_group_name size

For example: mklv -y mylogicalvol myvolgroup 100M

mklv -y datalv datavg 10G

How to find CPU speed in AIX OS


When using AIX 5.1 and subsequent releases, the following command returns the processor speed in hertz (Hz):
# lsattr -E -l proc0 | grep "Processor Speed"

Also, from AIX 5.1 the pmcycles command lists the processor speed:
# pmcycles

How to find CPU information in AIX OS

Use the commands below to find CPU information in AIX OS

prtconf

pmcycles

lscfg

lsattr -El proc0

lsdev -Cc processor

How to find the processor type in AIX OS

To determine the processor type for the English version of AIX, at the shell command prompt, type:

prtconf | grep -i "Processor Type"

To validate AIX is the English version, at the shell command prompt, type:

     echo $LANG

This will be set to "C" for the English version.

Depending upon the processor type and installation, the above command may give the following types of output:

On a Power 4 type of processor:

$ prtconf | grep -i "Processor Type"
Processor Type: PowerPC_POWER4

On a non Power 4 type RISC based processor:

$ prtconf | grep -i "Processor Type"
Processor Type:  PowerPC_RS64-II

On AIX systems the prtconf utility can be found in the /usr/sbin directory and the grep utility in /usr/bin.

Changing disk ownership on Netapp cluster

Change to special advanced mode
FILER1> priv set advanced
Warning: These advanced commands are potentially dangerous; use
them only when directed to do so by NetApp personnel.

Show UNOWNED disks
FILER1*> disk show -n
disk show: No disks match option -n.

Turn off auto disk ownership
FILER1*> options disk.auto_assign off

Remove ownership on disk
FILER1*> disk remove_ownership 0a.80
Disk 0a.80 will have its ownership removed. Volumes must be taken offline. Are all impacted volumes offline (y/n)? y

Show UNOWNED disks
FILER1*> disk show -n
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
0a.80 Not Owned NONE 3QQ1JM8K00009951WAX4

To change disk ownership to FILER2
FILER2> priv set advanced

Assign disks to FILER2
FILER2*> disk assign all

Turn ON auto disk ownership again on FILER1
FILER1*> options disk.auto_assign on

To get out of advanced mode
FILER1*> priv set
FILER2*> priv set

Change NetApp disk ownership

Changing Disk Ownership

All current NetApp storage controller systems can assume ownership of any disks it can see regardless of physical cabling through the use of software based ownership. This gives you flexibility in provisioning disks in a clustered environment. Changing ownership used to require fighting with the disk reassign command or downtime to go into maintenance mode. A little known option to the disk assign command will allow you to change ownership without taking downtime.



Examples

controller1> disk assign -s unowned 0a.23 - remove ownership on the system that owns the disk (controller1).
controller2> disk show -n - the disk shows up as unowned with the physical address it has on controller2.
controller2> disk assign 0b.23 - take ownership of the unowned disk on the partner system.


What It Means To You

Before software disk ownership, you had to take downtime to physically move disks from one cluster partner to another to expand a fast-growing aggregate or to provide another spare drive. With the -s unowned option you can reassign ownership of a spare disk without taking downtime. This also makes reassignment of disks easier on a new FAS2050, 3140, or 3170 cluster (on power-up, these single-chassis systems have a tendency to take possession of internal SAS/SATA disks in a higgledy-piggledy manner).

How do I subscribe to the Optional or Supplementary channels in RHN Classic ?

If using Red Hat Network (RHN) Classic


  • Log into the Customer Portal at https://access.redhat.com (as a user with Organization Administrator role).
  • On the Subscriptions main menu tab, select RHN Classic > Registered Systems
  • Choose the relevant system in the system list.
  • Click on Alter Channel Subscriptions.
  • Select the RHN Tools for RHEL5/6, Server/Workstation Optional for RHEL5/6, or Optional Productivity Apps (RHEL5) channel for Optional; or select the RHEL Server/Workstation Supplementary (RHEL6) or RHEL Supplemental (RHEL5) channel for Supplementary.
  • Click the Change Subscriptions button at the bottom of the page.

Red Hat Enterprise Linux 6.3 registration is failing with "ImportError: /usr/lib64/python2.6/site-packages/_xmlplus/parsers/pyexpat.so: symbol XML_SetHashSalt, version EXPAT_2_0_1_RH not defined in file libexpat.so.1 with link time reference"

Issue

Unable to register the system when executing rhn_register.


Environment

Red Hat Enterprise Linux 6.3
python-2.6.6-29.el6_2.2.x86_64.rpm
python-libs-2.6.6-29.el6_2.2.x86_64.rpm


Resolution

Update the python and python-libs packages to the latest version.
Download python-libs-2.6.6-36.el6.x86_64.rpm and python-2.6.6-36.el6.x86_64.rpm on the client system (from RHN/Satellite), then run the following command to install them:

# yum install python-libs-2.6.6-36.el6.x86_64.rpm python-2.6.6-36.el6.x86_64.rpm

How do I apply package updates from the Red Hat Network?

Prerequisite: registration

Systems must be registered before updates from RHN can be applied.

Red Hat Enterprise Linux 5 and later

Before installing an update, make sure all previously released errata relevant to the system have been applied.

To access updates when using Red Hat Enterprise Linux 5, launch the graphical update tool through Applications -> System Tools -> Software Updater, or from the command line via the following command:

# pup


To access updates when using Red Hat Enterprise Linux 6, launch the graphical update tool through System -> Administration -> Software Update, or from the command line via the following command:

# gpk-update-viewer


For a command line interface, use the following command to update the operating system:

# yum update


To install a specific package, such as vsftpd, use the following command:

# yum install vsftpd



To update a specific package, such as bind, use the following command:

# yum update bind
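
When scripting package maintenance, it helps to check whether the package is already installed before choosing between install and update. A minimal sketch; vsftpd is just an example package name:

```shell
# Sketch: pick "yum install" or "yum update" depending on whether the
# package is already installed. vsftpd is an example package name.
pkg=vsftpd
if command -v rpm >/dev/null 2>&1 && rpm -q "$pkg" >/dev/null 2>&1; then
    action=update
else
    action=install
fi
echo "yum $action $pkg"
```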

How can I access Red Hat Network via an HTTP proxy?

Red Hat Enterprise Linux 3, 4, 5, and 6 all use the rhn_register command to register the machine with RHN or RHN Satellite.

1) Open /etc/sysconfig/rhn/up2date for editing

2) Change the following:

   enableProxy=0

   To the following:

   enableProxy=1


3) If proxy authentication is required, set enableProxyAuth to 1:

   enableProxyAuth=1

4) Enter the user's password (if required) for the HTTP proxy on the following line:

   proxyPassword=

5) Enter the user's username (if required) for the HTTP proxy on the following line:

   proxyUser=

6) Enter the URL for the proxy server, in host:port format, in the following line:

   httpProxy=

7) Save the file.
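
The steps above can also be applied non-interactively with sed. A hedged sketch that edits a throwaway copy first; the proxy host, port, and user below are placeholders, and on a real system cfg would point at /etc/sysconfig/rhn/up2date:

```shell
# Sketch: apply the up2date proxy settings with sed. This edits a
# temporary copy; point cfg at /etc/sysconfig/rhn/up2date for real use.
# proxy.example.com:3128 and rhnuser are placeholder values.
cfg=$(mktemp)
printf 'enableProxy=0\nenableProxyAuth=0\nhttpProxy=\nproxyUser=\nproxyPassword=\n' > "$cfg"
sed -i \
    -e 's/^enableProxy=.*/enableProxy=1/' \
    -e 's/^enableProxyAuth=.*/enableProxyAuth=1/' \
    -e 's|^httpProxy=.*|httpProxy=proxy.example.com:3128|' \
    -e 's/^proxyUser=.*/proxyUser=rhnuser/' "$cfg"
grep '^httpProxy=' "$cfg"
```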

Create a volume group

mkvg -y datavg hdisk3

This creates a volume group named datavg on hdisk3 (-y specifies the volume group name).


route command for windows

route -p add 10.0.9.0 mask 255.255.255.0 192.168.98.1


10.0.9.0 is the destination network (with mask 255.255.255.0)

192.168.98.1 is the gateway used to reach the destination network; -p makes the route persistent across reboots

Increasing the size of a LUN

You can increase the size of your thinly provisioned or space-reserved LUNs with the lun resize command.

About this task

If the configured size of a LUN is filled and you cannot write additional blocks to that LUN, you can increase the size of the LUN with the lun resize command, provided your volume contains enough space. For example, if you configure the size of a LUN to be 50 GB and all 50 GB of that LUN is filled with data, you can use the lun resize command to increase the size of your LUN provided the volume has available space.

You do not have to take your LUN offline to increase the size of that LUN. When you increase the size of your LUN, Data ONTAP automatically notifies the initiator that the LUN size has increased.

You can grow a LUN to approximately 10 times its original size. For example, if you create a 100 GB LUN, you can grow that LUN to approximately 1,000 GB. However, you cannot exceed 16 TB, which is the approximate maximum LUN size limit.

Use the lun resize command to increase the size of your LUN.

Example

lun resize /vol/italy/venice +10g

The command above increases the size of the venice LUN by 10 GB.

lun resize /vol/italy/venice -10g

The command above decreases the size of the venice LUN by 10 GB. Shrink with caution: any data beyond the new, smaller size is lost.
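
The growth limits described above (roughly 10x the original size, with a 16 TB ceiling) can be sanity-checked in the shell before calling lun resize. A minimal sketch; the sizes are example values:

```shell
# Sketch: check a requested LUN size against the ~10x growth rule and
# the 16 TB (16384 GB) ceiling before calling lun resize.
# orig_gb and new_gb are example values.
orig_gb=100
new_gb=1000
max_gb=$(( orig_gb * 10 ))
if [ "$new_gb" -le "$max_gb" ] && [ "$new_gb" -le 16384 ]; then
    echo "resize to ${new_gb}g is within limits"
else
    echo "resize to ${new_gb}g exceeds limits" >&2
fi
```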

Create a thin-provisioned LUN

Command to create a thin-provisioned LUN in NetApp:

lun create -s size -t type -o noreserve /vol/path

Example

lun create -s 40g -t windows -o noreserve /vol/italy/venice

The command above creates a 40 GB thin-provisioned LUN of type windows.

-o noreserve is the parameter that tells NetApp to create a thin-provisioned LUN.


LUNs mapped to an initiator group

The following command lists the LUNs mapped to an initiator group:

lun show -v -g initiatorgroup

Example

lun show -v -g italy_ig


Clone an existing LUN

The following command clones a single LUN within a volume:

clone start /vol/italy/MASTER_W2K8ENT /vol/italy/venice_os

This command is useful when you have a master OS LUN and want to create copies of it.



Add wwns to existing initiator group

igroup add initiatorgroup wwn

Example

igroup add Server8 10:01:39:54:00:40:E2:02

Create an initiator group in NetApp



igroup create {-i | -f} -t ostype initiator_group [node ...]

-i specifies that the igroup contains iSCSI node names.

-f specifies that the igroup contains FCP WWPNs.

-t ostype indicates the operating system type of the initiator. The values are solaris, solaris_efi, windows, windows_gpt, windows_2008, hpux, aix, linux, netware, vmware, xen, and hyper_v.

initiator_group is the name you specify as the name of the igroup.

node is a list of iSCSI node names or FCP WWPNs, separated by spaces.

Example
iSCSI example:

igroup create -i -t windows WindowsServer iqn.1991-05.com.microsoft:host5.domain.com

FCP example:

igroup create -f -t aix AIXserver 10:00:00:00:0c:2b:cc:92

Note that the initiator group is created with ALUA enabled.

Map a NetApp lun to initiatorgroup


lun map lun_path initiator_group [lun_id]

lun_path is the path name of the LUN you created.

initiator_group is the name of the igroup you created.

lun_id is the identification number that the initiator uses when the LUN is mapped to it. If you do not enter a number, Data ONTAP generates the next available LUN ID number.

Example

The following command maps /vol/italy/venice to the igroup venice_ig at LUN ID 0:

lun map /vol/italy/venice venice_ig 0

Command to create lun in NetApp

lun create -s size -t ostype lun_path

-s size indicates the size of the LUN to be created, in bytes by default.

-t ostype indicates the LUN type. The LUN type refers to the operating system type, which determines the geometry used to store data on the LUN.

lun_path is the LUN’s path name that includes the volume and qtree.

Example

The following example command creates a 600-GB LUN called /vol/italy/venice that is accessible by a Windows host. Space reservation is enabled for the LUN.
lun create -s 600g -t windows /vol/italy/venice

/vol is the standard path prefix
italy is the volume
venice is the LUN name

Note that the command above creates a thick-provisioned (space-reserved) NetApp LUN.

Put the host in Maintenance Mode

esxcli system maintenanceMode set -e true -t 0 
 
 

Power ON Lpar from HMC

chsysstate -m aix10-SN65158BE -o on -r lpar -n aix10 -f default


Shutdown lpar from HMC

chsysstate -m <managedsysname> -r lpar -n <lparname> -o shutdown --immed


Display a managed system's LPAR list from HMC

lssyscfg -r lpar -m managedsystemname -F name

Display list of Managed systems from HMC

lssyscfg -r sys -F name

Delete HBA card from AIX VIO

rmdev -Rdl fcs1

# rmdev -Rdl fcs1
fcnet1 deleted
fscsi1 deleted
fcs1 deleted

Find Status of HBA card in AIX

lsattr -El fscsi0

# lsattr -El fscsi0
attach       switch       How this adapter is CONNECTED         False
dyntrk       no           Dynamic Tracking of FC Devices        True
fc_err_recov delayed_fail FC Fabric Event Error RECOVERY Policy True
scsi_id      0x11300      Adapter SCSI ID                       False
sw_fc_class  3            FC Class for Fabric                   True

Find WWN of HBA card in AIX

lscfg -vpl fcs0


# lscfg -vpl fcs0
  fcs0             U78A0.001.DNWHWC3-P1-C3-T1  4Gb FC PCI Express Adapter (df1000fe)

        Part Number.................10N7255
        Serial Number...............1C94508830
        Manufacturer................001C
        EC Level....................D76626
        Customer Card ID Number.....5774
        FRU Number.................. 10N7255
        Device Specific.(ZM)........3
        Network Address.............10000000C99393EE
        ROS Level and ID............02E8277F
        Device Specific.(Z0)........2057706D
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........03000909
        Device Specific.(Z4)........FFE01212
        Device Specific.(Z5)........02E8277F
        Device Specific.(Z6)........06E12715
        Device Specific.(Z7)........07E1277F
        Device Specific.(Z8)........20000000C99393EE
        Device Specific.(Z9)........ZS2.71X15
        Device Specific.(ZA)........Z1F2.70A5
        Device Specific.(ZB)........Z2F2.71X15
        Device Specific.(ZC)........00000000
        Hardware Location Code......U78A0.001.DNWHWC3-P1-C3-T1


  PLATFORM SPECIFIC

  Name:  fibre-channel
    Model:  LPe11002
    Node:  fibre-channel@0
    Device Type:  fcp
    Physical Location: U78A0.001.DNWHWC3-P1-C3-T1

Install NetApp Multipath Drivers on VIO

copy NetApp multipath drivers to VIO

NetApp Multipath Drivers

ntap_aix_host_utilities_6.0 (contains the MPIO, NON MPIO, and SAN_Tool_Kit directories)

Change to the MPIO directory and run the command below:

cd ntap_aix_host_utilities_6.0
cd MPIO


smitty installp

Then select "Install Software"
Select the present working directory "." as the Input Device, then press Enter
Change "ACCEPT new license agreements" to yes
Then press Enter

Then install the SAN Tool Kit:

cd ../SAN_Tool_Kit

smitty installp

Then select "Install Software"
Select the present working directory "." as the Input Device, then press Enter
Change "ACCEPT new license agreements" to yes
Then press Enter
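
The smitty dialogs above can also be done non-interactively with installp (-a apply, -X auto-extend filesystems, -Y accept license agreements). A preview-only sketch that just prints the commands to run on the VIO server itself:

```shell
# Sketch: print the non-interactive installp equivalents of the smitty
# steps above. Preview only; execute the printed commands on the VIO server.
base=ntap_aix_host_utilities_6.0
for dir in MPIO SAN_Tool_Kit; do
    echo "cd $base/$dir && installp -aXY -d . all"
done
```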



Increase filesystem in AIX

chfs -a size=+4G /

Tivoli Product Documentation

For IBM Tivoli product documentation, refer to the link below:

http://www.tivolisupport.com

Install lsof in AIX 6.1

Download lsof-4.84-1.aix6.1.ppc.rpm from the URL below:

http://www.perzl.org/aix/index.php?n=Main.Lsof

Copy the downloaded RPM to the AIX box and install it:
rpm -ivh lsof-4.84-1.aix6.1.ppc.rpm

Install RPMs in AIX

rpm -ivh lsof-4.84-1.aix6.1.ppc.rpm

Transfer files from IO Director

file copy IODirectorlog scp://root@redhatlinuxserver/destinationfolder

Generate log files from IO Director

get-log-files -all
show tech-support

Rescan vhba from Xsigo IO Director CLI

set vhba vhba1.xsigotest2 rescan


Creating Xsigo vhba

add vhba <vhba-name>.<profile-name> <slot>/<port>

add vhba vhba1.xsigoserverprofile 1/1