Sunday, 16 October 2011

Deleting a File System, Removing an LV, Reducing/Removing a VG

Sometimes a mount point is no longer used and we want to remove that file system and use the space on some other server. To reclaim the disk from the server, use the plan of action explained below:
Unmount file system (FS)
Remove Logical Volume (LV)
Reduce/Remove Volume Group (VG)
Remove Physical Volume (PV) from LVM.

Unmount file system (FS)
To make sure nobody is using the file system, use the fuser command.
# fuser -cu /data
/data:   28212c(user1)
u: Display the login user name in parentheses following each process ID.
c: Display the use of a mount point and any file beneath that mount point. Each file must be a file system mount point.

If you get the above result, it means user1 is using this mount point, so check with user1 about when he will finish his job; once he is done, you can proceed with the procedure.
To free up a busy mount point, you can use the fuser command with the -k option:
 k :- Send the SIGKILL signal to each process using each file.
#fuser -kuc /data
Unmount the file system:
# umount /data
Cross-check with the bdf command.
Remove the entry for the /data file system from the /etc/fstab file.
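For example (a quick sketch, assuming the entry looks like the one created in the 7 October post below), you can list the line that refers to /data before deleting it with vi:
# grep /data /etc/fstab
# vi /etc/fstab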
Remove Logical Volume (LV)
To remove the LV, use the lvremove command; when prompted, press y to remove the volume.
 #lvremove /dev/vg03/lvol1
The logical volume "/dev/vg03/lvol1" is not empty;
do you really want to delete the logical volume (y/n) : y
Logical volume "/dev/vg03/lvol1" has been successfully removed.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf

If the logical volume contains data, LVM prompts you for confirmation. You can use the -f flag to remove the logical volume without a confirmation request.
#lvremove -f /dev/vg03/lvol1
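To confirm the logical volume is gone, you can check the volume group again (a quick check, assuming the VG is vg03 as above); Cur LV should have decreased and the freed extents should show up under Free PE:
# vgdisplay /dev/vg03 | grep "Cur LV"
# vgdisplay /dev/vg03 | grep "Free PE"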


Reducing/Removing volume group

To remove a PV from a volume group, first verify that none of the physical extents (PE) on the disk are currently in use. Use the pvdisplay command to check:
# pvdisplay -v /dev/disk/disk42
--- Physical volumes ---
PE Size (Mbytes)                        8
Total PE                                       255
Free PE                                        250

--- Physical extents ---
PE              Status            LV                             LE
00001    current      /dev/vg03/lvol2               00001
00002    current      /dev/vg03/lvol2               00002
00003    current      /dev/vg03/lvol2               00003
00240    free                                                  00000
00241    free                                                  00000
00242    free                                                  00000

The output shows that some PEs on this disk are still used by an LV, so you first need to move these PEs to free PEs on another disk in the volume group.
#pvmove /dev/disk/disk42 /dev/disk/disk41
Physical volume "/dev/disk/disk42" has been successfully moved.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf

Here you are moving the PEs of disk42 to disk41; the pvmove command can be run while file systems are mounted and active.
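You can confirm the move by looking at the disk again (a quick check, assuming the same disk names as above); after pvmove, Free PE on disk42 should equal Total PE:
# pvdisplay /dev/disk/disk42 | grep PE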
Now you can reduce the volume group, as no part of disk42 is in use.
#vgreduce /dev/vg03 /dev/disk/disk42
Volume group "vg03" has been successfully reduced.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf

Verify that only one disk remains in the volume group. The volume group cannot be removed while it still contains more than one physical disk.
# vgdisplay -v vg03 |grep "PV Name"
PV Name         /dev/disk/disk41

 Remove the volume group vg03.
# vgremove vg03
Volume group "vg03" has been successfully removed.

Remove the directory and group DSFs for the volume group. Press y when prompted.
# rm -ir /dev/vg03
directory /dev/vg03: ? (y/n) y
/dev/vg03/group: ? (y/n) y
/dev/vg03: ? (y/n) y

Remove Physical Volume (PV) From LVM
Remove the LVM headers from both of the disks using the pvremove command.
# pvremove /dev/rdisk/disk42
The physical volume associated with "/dev/rdisk/disk42" has been removed.
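The example above clears disk42; if the remaining disk of the volume group (disk41 in this walkthrough) is also being reclaimed, run pvremove on it as well:
# pvremove /dev/rdisk/disk41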

Saturday, 8 October 2011

Extend Volume Group, Logical Volume and File System


If a file system needs to be extended, first check for free space in the LV. If the LV does not have sufficient free space, check for free space in the VG. If the VG also does not have sufficient free space, you need to add a new physical volume (PV). If your system already has space at any of these steps, you can proceed from that point onwards. In the scenario below we assume there is not sufficient free space in either the LV or the VG.
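A quick way to see where you stand before starting (a sketch, assuming the mount point is /data and the volume group is vg03, as in the examples below):
# bdf /data
# vgdisplay /dev/vg03 | grep "Free PE"
If bdf shows the file system nearly full and Free PE is 0, you will need to add a new PV as described below.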

Plan of action
Identify new disk and create PV
Extend Volume Group
Extend Logical Volume
Extend File System

Identify new disk and create PV

(these steps are explained in detail in the 7 October post below)
Detect new LUN on the server
#ioscan -nfNC disk
 
To find which disks are not used in the LVM
#pvdisplay -l /dev/disk/*

Verify size of disk
#diskinfo  /dev/rdisk/disk42

Initialize the PV before using it in LVM
# pvcreate /dev/rdisk/disk42


Extend Volume group

Add the physical volume in the volume group using the vgextend command.
# vgextend /dev/vg03 /dev/disk/disk42

Volume group "/dev/vg03" has been successfully extended.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf

Verify that the volume group is successfully extended using the vgdisplay command.
It gives complete information about VG, LV, and PV
#vgdisplay -v /dev/vg03

You can also use grep to get only the useful information, like below:
# vgdisplay -v /dev/vg03 | grep "PV Name"

PV Name /dev/disk/disk41
PV Name /dev/disk/disk42

Now the VG has free space, so we can extend the LV.
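You can see exactly how much space is available before extending (assuming vg03 as above):
# vgdisplay /dev/vg03 | grep "PE Size"
# vgdisplay /dev/vg03 | grep "Free PE"
The free space in MB is Free PE multiplied by the PE size.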

Extend Logical Volume

A logical volume can span multiple physical volumes, but it cannot span multiple volume groups. If you don't specify where LVM should allocate the new extents, LVM simply uses the first available extents in the volume group.

Extend lvol1 to 1924 MB (the -L option takes the new size in MB) without specifying a PV:
# lvextend -L 1924 /dev/vg03/lvol1



Logical volume "/dev/vg03/lvol1" has been successfully extended.

Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf

Extending lvol1 to 1924 MB, specifying the PV to allocate from:
# lvextend -L 1924 /dev/vg03/lvol1 /dev/disk/disk42

If the VG does not have enough free space, you may get the following error:


lvextend: Not enough free physical extents available.

Logical volume "/dev/vg03/lvol1" could not be extended.

Failure possibly caused by contiguous allocation policy.

Failure possibly caused by strict allocation policy
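If the message points to an allocation policy rather than a lack of extents, you can check the current policy of the LV (a quick check; Allocation is part of the standard lvdisplay output):
# lvdisplay /dev/vg03/lvol1 | grep Allocation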
Verify the volume size by using the lvdisplay command.
# lvdisplay -v /dev/vg03/lvol1

It gives us complete information about the LV, the distribution of the logical volume across PVs, and the LEs.
We can use the grep command to get only the useful information:
# lvdisplay -v /dev/vg03/lvol1 | grep "LV Size (Mbytes)"



LV Size (Mbytes)            1924

Now that the LV is verified to have sufficient space, you can extend the file system.

Extend File System

Simply extending a logical volume does not make the new space available for use by the file system in that volume. The new space in the volume cannot be used until the file system’s superblock and other metadata structures have been notified that new space is available.

Extending the file system without OnlineJFS:
First, unmount the /data file system:
# umount /data

Extend the file system using the extendfs command.
#extendfs -F vxfs -s 115200 /dev/vg03/rlvol1

Here F: - File system type
         s :- Specifies the number of DEV_BSIZE blocks to be added to the file system. 

If size is not specified, the maximum possible size is used.
# extendfs /dev/vg03/rlvol1

You can check the file system block size with the command below:
#fstyp -v /dev/vg03/lvol1 | grep f_bsize



f_bsize: 8192

 OR
#df -g /dev/vg03/lvol1 |grep "file system block size"



8192 file system block size            1024 fragment size

From this we get: 1 block = 8192 bytes = 8 KB.
Re-mount the /data file system.
# mount /data

With OnlineJFS you do not need to unmount the file system. Use fsadm instead:
# fsadm -b <new size in KB> <mountpoint>

#fsadm -b 943718400 /data
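Here 943718400 KB is 900 × 1024 × 1024 KB, i.e. the new 900 GB size of /data expressed in KB, which is what the -b option expects.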
Verify you have the OnlineJFS software by using the command
# swlist | grep JFS

 Check the file system extension using the bdf command.
# bdf | grep /dev/vg03/lvol1



Filesystem          kbytes       used         avail        %used  Mounted on
/dev/vg03/lvol1     943718400    283115520    660612880    30%    /data

Friday, 7 October 2011

Creating a Mount Point after Presenting a LUN from Storage


1.  Identify newly added LUN.
2.  Create Physical volume(PV)
3.  Create Volume Group(VG)
4.  Create Logical Volume(LV)
5.  Create File system
6.  Mount File system
7.  Entries in /etc/fstab file.

Here are the explanations, with commands, for performing the above plan of action:

Identify newly added LUN

Create a LUN on the storage and present it to the server on which you want to create the new mount point. To detect the new LUN on the server, use the command below; it will show all disks presented to the server so far.

# ioscan -fnNC disk

Here:
f:- Generate a full listing, displaying the module's class, instance number, hardware path, driver, software state, hardware type, and a brief description.
n:- List the device file names associated with each hardware module.
C:- Restrict the output listing to those devices belonging to the specified class.
N:- Display the agile view of the system hardware.

The command below shows the difference between persistent DSFs and legacy DSFs. In the next steps we are going to use persistent DSFs.

# ioscan -m dsf

Persistent DSF        Legacy DSF(s)
========================================
/dev/pt/pt4           /dev/rscsi/c0t0d0
                      /dev/rscsi/c2t0d0
                      /dev/rscsi/c4t0d0
                      /dev/rscsi/c6t0d0
/dev/rdisk/disk41     /dev/rdsk/c1t0d0
                      /dev/rdsk/c3t0d0
                      /dev/rdsk/c5t0d0
                      /dev/rdsk/c7t0d0
/dev/rdisk/disk42     /dev/rdsk/c1t0d1
                      /dev/rdsk/c3t0d1
                      /dev/rdsk/c5t0d1
                      /dev/rdsk/c7t0d1

To find which disks are not used in LVM:
# pvdisplay -l /dev/disk/*

/dev/disk/disk41:LVM_Disk=no
/dev/disk/disk42:LVM_Disk=yes
/dev/disk/disk43:LVM_Disk=yes
/dev/disk/disk44:LVM_Disk=yes
/dev/disk/disk45:LVM_Disk=yes

From the above output we can see that disk41 is not used in LVM, so we proceed with disk41. Also cross-check the size of the disk.

#diskinfo /dev/rdisk/disk41

SCSI describe of /dev/rdisk/disk41:
             vendor: HP
         product id: OPEN-V
               type: direct access
               size: 56691712 Kbytes
   bytes per sector: 512

The output shows 56691712 KB, roughly 54 GB, which is the size of disk we are looking for, so proceed to the next step.

Create Physical volume(PV)

A disk has to be initialized before LVM can use it.

#pvcreate /dev/rdisk/disk41

Physical volume "/dev/rdisk/disk41" has been successfully created.

If disk41 was already initialized before, you will get the error message below:

pvcreate: The Physical Volume already belongs to a Volume Group

If you are sure the disk is free, you can force the initialization using the -f option:

# pvcreate -f /dev/rdisk/disk41

Create Volume Group(VG)


Select a unique minor number for the VG:

# ll /dev/*/group

crw-r--r-- 1 root sys 64 0x000000 Apr 4 2010 /dev/vg00/group
crw-r--r-- 1 root sys 64 0x010000 Oct 26 15:52 /dev/vg01/group
crw-r--r-- 1 root sys 64 0x020000 Aug 2 15:49 /dev/vg02/group

Create the VG control file (group file):

# mkdir /dev/vg03

# mknod /dev/vg03/group c 64 0x030000

Here c creates a character device file, 64 is the major number used for LVM group files (as seen in the ll output above), and 0x030000 is the unique minor number chosen for vg03.

Create the VG
#vgcreate  -s 256 /dev/vg03 /dev/disk/disk41

Volume group "/dev/vg03" has been successfully created.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf

s: Size of a physical extent (PE) in MB.

If you have two or more PVs to add to a VG, you can add them in one go by listing them after disk41, separated by spaces:
# vgcreate -s 256 /dev/vg03 /dev/disk/disk41 /dev/disk/disk40
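With -s 256 and the single ~54 GB disk41 used above, the vgdisplay output below shows Total PE 216, i.e. 216 × 256 MB = 55296 MB of usable space in the volume group.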

To display VG information

#vgdisplay  -v /dev/vg03

--- Volume groups ---
VG Name                           /dev/vg03
VG Write Access                read/write
VG Status                           available
Max LV                             255
Cur LV                              1
Open LV                           1
Max PV                             16
Cur PV                              1
Act PV                              1
Max PE per PV                1727
VGDA                               2
PE Size (Mbytes)             256
Total PE                           216
Alloc PE                           0
Free PE                            216
Total PVG                         0
Total Spare PVs                0
Total Spare PVs in use      0
VG Version                       1.0
VG Max Size                    6908g
VG Max Extents               27632

Create Logical Volume(LV)

To create an LV from a VG (options: -L assigns the size in MB; -l assigns the size in number of logical extents (LE); -n assigns a name to the LV):

# lvcreate -L 55040 -n lvol1 /dev/vg03

Logical volume "/dev/vg03/lvol1" has been successfully created with character device "/dev/vg03/lvol1"
Logical volume "/dev/vg03/lvol1" has been successfully extended.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf
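The size works out from the extent size chosen earlier: 55040 MB / 256 MB per extent = 215 extents, which matches the Current LE and Allocated PE values in the lvdisplay output below.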

 To display LV information

# lvdisplay -v /dev/vg03/lvol1

--- Logical volumes ---
LV Name                          /dev/vg03/lvol1
VG Name                         /dev/vg03
LV Permission                 read/write
LV Status                          available/syncd
Mirror copies                    0
Consistency Recovery     MWC
Schedule                           parallel
LV Size (Mbytes)             55040
Current LE                        215
Allocated PE                     215
Stripes                               0
Stripe Size (Kbytes)          0
Bad block                          on
Allocation                         strict
IO Timeout (Seconds)      default

Create File system

 You can use newfs to put a FS onto the LV:

# newfs  -F vxfs /dev/vg03/rlvol1

F: - File system type, either hfs or vxfs. Nowadays it is recommended to always use a VxFS (= JFS) file system. Note that newfs is run on the raw device file (rlvol1).
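If the file system may need to hold files larger than 2 GB, largefiles support can be enabled at creation time (a sketch, assuming your VxFS version supports the largefiles option; check newfs_vxfs(1M) on your system):

# newfs -F vxfs -o largefiles /dev/vg03/rlvol1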

Mount File system

Mount the newly created file system. First create the mount-point directory:

# mkdir /data

# mount /dev/vg03/lvol1 /data

Use the bdf command to see the mounted file systems

#bdf

Entries in /etc/fstab file

Make an entry in the /etc/fstab file to make the mount point permanent across reboots. You can do this with the command below, or open the file with the vi editor and add the entry at the end.

# echo "/dev/vg03/lvol1 /data vxfs defaults 0 2" >> /etc/fstab


#vi /etc/fstab

# System /etc/fstab file.  Static information about the file systems
# See fstab(4) and sam(1M) for further details on configuring devices.
/dev/vg00/lvol3 / vxfs delaylog 0 1
/dev/vg00/lvol1 /stand vxfs tranflush 0 1
/dev/vg00/lvol4 /home vxfs delaylog 0 2
/dev/vg00/lvol5 /opt vxfs delaylog 0 2
/dev/vg00/lvol6 /tmp vxfs delaylog 0 2
/dev/vg00/lvol7 /var vxfs delaylog 0 2
/dev/vg00/lvol8 /usr vxfs delaylog 0 2
/dev/vg03/lvol1 /data vxfs defaults 0 2
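As a sanity check on the new entry without rebooting, you can unmount /data and mount it again using only the fstab entry (mount looks up the device in /etc/fstab when given just the mount point):
# umount /data
# mount /data
# bdf | grep /data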