Exadata Storage Snapshots

This post describes how to implement Oracle Database Snapshot Technology on an Exadata Machine.

Because the Exadata Storage Cell Smart Features (Storage Indexes, IORM and Network Resource Manager) work only at the level of the ASM Volume Manager, and not on top of the ACFS Cluster File System, the implementation of the snapshot technology differs from that of any non-Exadata environment.

For this purpose Oracle has developed a new type of ASM Disk Group called the SPARSE Disk Group. It uses ASM SPARSE Grid Disks based on Thin Provisioning to store the database snapshot copies and the associated metadata, and it supports both non-CDB and PDB snapshot copies.

The implementation requires the following minimum software versions:

  • Exadata Storage Software version 12.1.2.1.0.
  • Oracle Database version 12.1.0.2 with bundle patch 5.
One major restriction applies to Exadata Storage Snapshots compared to ACFS:
the source database must be a shared copy, open in read-only mode, called the Test Master. The Test Master Database cannot be modified or deleted as long as the latest child snapshot is in use.
This restriction exists because the Exadata Snapshot technology uses “allocate on first write”, and not “copy on write” (as ACFS does), and the snapshot is per-database-datafile.
When a child snapshot issues a write, the write goes to a private copy of that block inside the snapshot, preserving the original block value, which can still be accessed by the other child snapshots of the same Test Master.

How to Implement Exadata Storage Snapshots in a PDB Environment

Check the cell disks for available free space to allocate to a new SPARSE Disk Group:

[root@strgceladm01 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm01 853.34375G
 CD_01_strgceladm01 853.34375G
 CD_02_strgceladm01 853.34375G
 CD_03_strgceladm01 853.34375G
 CD_04_strgceladm01 853.34375G
 CD_05_strgceladm01 853.34375G
 CD_06_strgceladm01 853.34375G
 CD_07_strgceladm01 853.34375G
 CD_08_strgceladm01 853.34375G
 CD_09_strgceladm01 853.34375G
 CD_10_strgceladm01 853.34375G
 CD_11_strgceladm01 853.34375G
 FD_00_strgceladm01 0
 FD_01_strgceladm01 0
 FD_02_strgceladm01 0
 FD_03_strgceladm01 0
[root@strgceladm01 ~]#


[root@strgceladm02 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm02 853.34375G
 CD_01_strgceladm02 853.34375G
 CD_02_strgceladm02 853.34375G
 CD_03_strgceladm02 853.34375G
 CD_04_strgceladm02 853.34375G
 CD_05_strgceladm02 853.34375G
 CD_06_strgceladm02 853.34375G
 CD_07_strgceladm02 853.34375G
 CD_08_strgceladm02 853.34375G
 CD_09_strgceladm02 853.34375G
 CD_10_strgceladm02 853.34375G
 CD_11_strgceladm02 853.34375G
 FD_00_strgceladm02 0
 FD_01_strgceladm02 0
 FD_02_strgceladm02 0
 FD_03_strgceladm02 0
[root@strgceladm02 ~]#


[root@strgceladm03 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm03 853.34375G
 CD_01_strgceladm03 853.34375G
 CD_02_strgceladm03 853.34375G
 CD_03_strgceladm03 853.34375G
 CD_04_strgceladm03 853.34375G
 CD_05_strgceladm03 853.34375G
 CD_06_strgceladm03 853.34375G
 CD_07_strgceladm03 853.34375G
 CD_08_strgceladm03 853.34375G
 CD_09_strgceladm03 853.34375G
 CD_10_strgceladm03 853.34375G
 CD_11_strgceladm03 853.34375G
 FD_00_strgceladm03 0
 FD_01_strgceladm03 0
 FD_02_strgceladm03 0
 FD_03_strgceladm03 0
[root@strgceladm03 ~]#

For each Storage Cell, create the SPARSE Grid Disks as described below:

[root@strgceladm01 ~]# cellcli -e CREATE GRIDDISK ALL PREFIX=SPARSE, sparse=true, SIZE=853.34375G
Cell disks were skipped because they had no freespace for grid disks: FD_00_strgceladm01, FD_01_strgceladm01, FD_02_strgceladm01, FD_03_strgceladm01.
GridDisk SPARSE_CD_00_strgceladm01 successfully created
GridDisk SPARSE_CD_01_strgceladm01 successfully created
GridDisk SPARSE_CD_02_strgceladm01 successfully created
GridDisk SPARSE_CD_03_strgceladm01 successfully created
GridDisk SPARSE_CD_04_strgceladm01 successfully created
GridDisk SPARSE_CD_05_strgceladm01 successfully created
GridDisk SPARSE_CD_06_strgceladm01 successfully created
GridDisk SPARSE_CD_07_strgceladm01 successfully created
GridDisk SPARSE_CD_08_strgceladm01 successfully created
GridDisk SPARSE_CD_09_strgceladm01 successfully created
GridDisk SPARSE_CD_10_strgceladm01 successfully created
GridDisk SPARSE_CD_11_strgceladm01 successfully created
[root@strgceladm01 ~]#

For each Storage Cell, list all Grid Disks:

[root@strgceladm01 ~]# cellcli -e list griddisk attributes name,size
 DATAC1_CD_00_strgceladm01 6.294586181640625T
 DATAC1_CD_01_strgceladm01 6.294586181640625T
 DATAC1_CD_02_strgceladm01 6.294586181640625T
 DATAC1_CD_03_strgceladm01 6.294586181640625T
 DATAC1_CD_04_strgceladm01 6.294586181640625T
 DATAC1_CD_05_strgceladm01 6.294586181640625T
 DATAC1_CD_06_strgceladm01 6.294586181640625T
 DATAC1_CD_07_strgceladm01 6.294586181640625T
 DATAC1_CD_08_strgceladm01 6.294586181640625T
 DATAC1_CD_09_strgceladm01 6.294586181640625T
 DATAC1_CD_10_strgceladm01 6.294586181640625T
 DATAC1_CD_11_strgceladm01 6.294586181640625T
 FGRID_FD_00_strgceladm01 2.0717315673828125T
 FGRID_FD_01_strgceladm01 2.0717315673828125T
 FGRID_FD_02_strgceladm01 2.0717315673828125T
 FGRID_FD_03_strgceladm01 2.0717315673828125T
 RECOC1_CD_00_strgceladm01 1.78143310546875T
 RECOC1_CD_01_strgceladm01 1.78143310546875T
 RECOC1_CD_02_strgceladm01 1.78143310546875T
 RECOC1_CD_03_strgceladm01 1.78143310546875T
 RECOC1_CD_04_strgceladm01 1.78143310546875T
 RECOC1_CD_05_strgceladm01 1.78143310546875T
 RECOC1_CD_06_strgceladm01 1.78143310546875T
 RECOC1_CD_07_strgceladm01 1.78143310546875T
 RECOC1_CD_08_strgceladm01 1.78143310546875T
 RECOC1_CD_09_strgceladm01 1.78143310546875T
 RECOC1_CD_10_strgceladm01 1.78143310546875T
 RECOC1_CD_11_strgceladm01 1.78143310546875T
 SPARSE_CD_00_strgceladm01 853.34375G
 SPARSE_CD_01_strgceladm01 853.34375G
 SPARSE_CD_02_strgceladm01 853.34375G
 SPARSE_CD_03_strgceladm01 853.34375G
 SPARSE_CD_04_strgceladm01 853.34375G
 SPARSE_CD_05_strgceladm01 853.34375G
 SPARSE_CD_06_strgceladm01 853.34375G
 SPARSE_CD_07_strgceladm01 853.34375G
 SPARSE_CD_08_strgceladm01 853.34375G
 SPARSE_CD_09_strgceladm01 853.34375G
 SPARSE_CD_10_strgceladm01 853.34375G
 SPARSE_CD_11_strgceladm01 853.34375G
[root@strgceladm01 ~]#

From an ASM Instance, create a SPARSE Disk Group:

SQL> CREATE DISKGROUP SPARSEC1 EXTERNAL REDUNDANCY DISK 'o/*/SPARSE_CD_*'
ATTRIBUTE
'compatible.asm' = '12.2.0.1',
'compatible.rdbms' = '12.2.0.1',
'cell.smart_scan_capable'='TRUE',
'cell.sparse_dg' = 'allsparse',
'AU_SIZE' = '4M';

Diskgroup created.
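
A quick sanity check from the ASM instance, using the standard v$asm views, confirms that the new disk group is mounted and carries the sparse attribute (a minimal sketch, assuming the disk group name SPARSEC1 used above):

SELECT name, state, type, total_mb FROM v$asm_diskgroup WHERE name = 'SPARSEC1';

SELECT name, value FROM v$asm_attribute
 WHERE name = 'cell.sparse_dg'
 AND group_number = (SELECT group_number FROM v$asm_diskgroup WHERE name = 'SPARSEC1');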

Set the following ASM attribute on the Disk Group hosting the Test Master Database:

ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'access_control.enabled' = 'true';

Grant access to the OS RDBMS user used to access the Disk Group:

ALTER DISKGROUP DATAC1 ADD USER 'oracle';
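
The grant can be verified by listing the OS users registered on the disk groups (a sketch using the standard v$asm_user view):

SELECT group_number, os_name FROM v$asm_user;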

From an ASM Instance, set ownership permissions for every file that belongs solely to the PDB being snapshot cloned, as in the example below:

alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/system.xxx.xxxxxxx';
alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/sysaux.xxx.xxxxxxx';
alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/users.xxx.xxxxxxx';
...
..
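
Rather than typing each statement by hand, the commands can be generated from the database. The sketch below, run from the CDB root, assumes the Test Master PDB is named PDBTESTMASTER and builds one SET OWNERSHIP statement per datafile:

-- Sketch: generate the SET OWNERSHIP statements for the Test Master PDB datafiles
select 'alter diskgroup DATAC1 set ownership owner=''oracle'' for file ''' || name || ''';'
from v$datafile
where con_id = (select con_id from v$pdbs where name = 'PDBTESTMASTER');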

Restart the Test Master PDB in Read Only:

alter pluggable database PDBTESTMASTER close immediate instances=all;
alter pluggable database PDBTESTMASTER open read only;
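
Before cloning, the open mode can be confirmed from the CDB root:

select name, open_mode from v$pdbs where name = 'PDBTESTMASTER';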

Create the first PDB Snapshot Copy on the Exadata SPARSE Disk Group:

Create pluggable database PDBDEV01 from PDBTESTMASTER tempfile reuse create_file_dest='+SPARSEC1' snapshot copy;
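
To confirm that the clone landed on the sparse disk group, the new PDB's datafiles can be listed from the CDB root (the paths should point to +SPARSEC1):

select name from v$datafile
where con_id = (select con_id from v$pdbs where name = 'PDBDEV01');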

Feedback on Exadata Storage Snapshots

The ability to create storage-efficient database copies in a few seconds, independently of the size of the Test Master, is very useful for today's IT departments; but such extreme velocity and flexibility is not entirely free. Performance tests on an I/O-bound workload have highlighted significant performance degradation. This reminds us that, as defined by Oracle Corporation, the Snapshot Technology included on the Exadata Machine remains a non-production option.

Troubleshooting an ACFS File System that Does Not Mount

The ACFS file system /cloudfs had been created and registered in the CRS; following a node reboot, the file system was no longer mounting!


I logged on to ASMCMD and checked the status of the ASM Volume:

[grid@rednodech07 ~]$ asmcmd
ASMCMD> volinfo -a
Diskgroup Name: FRA

 Volume Name: VOL_CLOUDFS
 Volume Device: ERROR
 State: DISABLED
 Size (MB): 20480
 Resize Unit (MB): 32
 Redundancy: MIRROR
 Stripe Columns: 4
 Stripe Width (K): 128
 Usage: ACFS
 Mountpath: /cloudfs

The output of the command above shows that the volume VOL_CLOUDFS is DISABLED. I tried to re-enable it manually, but I got the following error:

ASMCMD> volenable -a
ORA-15032: not all alterations performed
ORA-15477: cannot communicate with the volume driver (DBD ERROR: OCIStmtExecute)
ASMCMD>


Then I checked whether the ACFS kernel modules were loaded into the Linux kernel:

  • oracleacfs (oracleacfs.ko): manages all ACFS filesystem operations.
  • oracleavdm (oracleavdm.ko): AVDM module enabling direct interface with the filesystem.
  • oracleoks (oracleoks.ko): provides memory management, lock and cluster synchronization.
[root@rednodech07 ~]# /sbin/lsmod | grep oracle


Because the kernel modules were not loaded, I tried to load them manually with the command:

/bin/acfsload start
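
At this stage the driver state can also be double-checked with the acfsdriverstate utility shipped with the Grid Infrastructure home (a sketch; the Grid home path is environment-specific):

/u01/GRID/11.2.0.4/bin/acfsdriverstate loaded
/u01/GRID/11.2.0.4/bin/acfsdriverstate supported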

But acfsload didn’t work either, so I stopped the Grid Infrastructure on the local node and reinstalled the ACFS drivers:

[root@rednodech07 asm]# /u01/GRID/11.2.0.4/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rednodech07'
CRS-2673: Attempting to stop 'ora.crsd' on 'rednodech07'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rednodech07'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.cvu' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.GRID.dg' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.tvdtst.db' on 'rednodech07'
CRS-2677: Stop of 'ora.cvu' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.rednodech07.vip' on 'rednodech07'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rednodech07'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rednodech07'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rednodech07'
CRS-2677: Stop of 'ora.tvdtst.db' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rednodech07'
CRS-2677: Stop of 'ora.DATA.dg' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.rednodech07.vip' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.GRID.dg' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rednodech07'
CRS-2677: Stop of 'ora.asm' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rednodech07'
CRS-2677: Stop of 'ora.ons' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rednodech07'
CRS-2677: Stop of 'ora.net1.network' on 'rednodech07' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rednodech07' has completed
CRS-2677: Stop of 'ora.crsd' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.evmd' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.asm' on 'rednodech07'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rednodech07'
CRS-2677: Stop of 'ora.crf' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rednodech07' succeeded
CRS-2677: Stop of 'ora.asm' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rednodech07'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rednodech07'
CRS-2677: Stop of 'ora.cssd' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rednodech07'
CRS-2677: Stop of 'ora.gipcd' on 'rednodech07' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rednodech07'
CRS-2677: Stop of 'ora.gpnpd' on 'rednodech07' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rednodech07' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rednodech07 asm]#



[root@rednodech07 ~]# /u01/GRID/11.2.0.4/bin/acfsroot install
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
[root@rednodech07 ~]#


At this point I restarted the Grid Infrastructure; the ACFS kernel modules were loaded and the ASM Volume state became ENABLED.

[root@rednodech07 ~]# /u01/GRID/11.2.0.4/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.


[root@rednodech07 ~]# /sbin/lsmod | grep oracle
oracleacfs 1994567 0
oracleadvm 243254 0
oracleoks 460313 2 oracleacfs,oracleadvm

[grid@rednodech07 ~]$ asmcmd
ASMCMD> volinfo -a
Diskgroup Name: FRA

 Volume Name: VOL_CLOUDFS
 Volume Device: /dev/asm/vol_cloudfs-390
 State: ENABLED
 Size (MB): 20480
 Resize Unit (MB): 32
 Redundancy: MIRROR
 Stripe Columns: 4
 Stripe Width (K): 128
 Usage: ACFS
 Mountpath: /cloudfs

ASMCMD>

It only remained to mount the ACFS file system with the following command:

[root@rednodech07 /]# /bin/mount -t acfs /dev/asm/vol_cloudfs-390 /cloudfs

[oracle@rednodech07 duplicate_tcswu]$ mount
/dev/mapper/vg_rednodech07-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sdbc1 on /boot type ext4 (rw)
/dev/mapper/vg_rednodech07-lv_home on /home type ext4 (rw)
/dev/mapper/vg_rednodech07-lv_tmp on /tmp type ext4 (rw)
/dev/mapper/vg_rednodech07-lv_u01 on /u01 type ext4 (rw)
/dev/mapper/vg_rednodech07-lv_var on /var type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/asm/vol_cloudfs-390 on /cloudfs type acfs (rw)
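
Because the file system is registered in the Oracle Registry, it should mount automatically at the next restart; the registration can be double-checked with acfsutil (sketch):

[root@rednodech07 /]# /sbin/acfsutil registry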


ASM Storage Reclamation Utility (ASRU) for HP 3PAR Thin Provisioning


The ASM Storage Reclamation Utility (ASRU) reclaims storage from an ASM disk group that was previously allocated but is no longer in use, for example after decommissioning a database. This Perl script writes blocks of zeros where space is currently unallocated; the zero blocks are interpreted by the 3PAR Storage Server as physical space to reclaim.

The execution of the ASRU script consists of three sequential phases:

  1. Compaction: the disks are logically resized, keeping 25% of free space for future needs, without affecting the physical size of the disks. This operation triggers the ASM disk group rebalance, which compacts the data at the beginning of the disks (see the sketch after this list).
  2. Deallocation: this phase writes blocks of zeros above the current data High Water Mark; those blocks of zeros are interpreted by the storage as space available for reclaiming.
  3. Expansion: the utility resizes the logical disks to their original size; because the data remains untouched, no ASM rebalance operation is required.
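
As a rough illustration of the Compaction phase, the query below estimates a per-disk resize target for the DATA disk group, under the assumption (stated above) that ASRU keeps about 25% of headroom on top of the allocated space; the exact sizing logic is internal to the script:

-- Sketch only: approximate Compaction resize target per disk of group DATA,
-- assuming a target of roughly used space plus 25% headroom.
select name, total_mb, total_mb - free_mb used_mb,
       ceil((total_mb - free_mb) * 1.25) approx_target_mb
from v$asm_disk
where group_number = (select group_number from v$asm_diskgroup where name = 'DATA');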


How to use ASRU

ASM Disk Groups


ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 4096 4194304 3071904 1220008 511984 354012 0 N DATA/
MOUNTED NORMAL N 512 4096 4194304 7167776 3631252 511984 1559634 0 N FRA/
MOUNTED HIGH N 512 4096 1048576 41886 40621 20448 6405 0 Y OCRVOTING/
ASMCMD>

——————————————————————
Invoke the ASRU utility as the Grid Infrastructure owner
——————————————————————

[grid@xxxxxxxx space_reclaim]$ bash ASRU DATA
Checking the system ...done
Calculating the sizes of the disks ...done
Writing the data to a file ...done
Resizing the disks...done
Calculating the sizes of the disks ...done

/u01/GRID/11.2.0.4/perl/bin/perl -I /u01/GRID/11.2.0.4/perl/lib/5.10.0 /cloudfs/space_reclaim/zerofill 7 /dev/mapper/asm500GB_360002ac0000000000000000c0000964bp1 385789 511984 /dev/mapper/asm500GB_360002ac000000000000000150000964cp1 385841 511984 /dev/mapper/asm500GB_360002ac000000000000000160000964cp1 385813 511984 /dev/mapper/asm500GB_360002ac000000000000000110000964bp1 385869 511984 /dev/mapper/asm500GB_360002ac000000000000000120000964bp1 385789 511984 /dev/mapper/asm500GB_360002ac000000000000000140000964cp1 385789 511984
126171+0 records in
126171+0 records out
132299882496 bytes (132 GB) copied, 519.831 s, 255 MB/s
126195+0 records in
126195+0 records out
132325048320 bytes (132 GB) copied, 519.927 s, 255 MB/s
126195+0 records in
126195+0 records out
132325048320 bytes (132 GB) copied, 520.045 s, 254 MB/s
126143+0 records in
126143+0 records out
132270522368 bytes (132 GB) copied, 520.064 s, 254 MB/s
126115+0 records in
126115+0 records out
132241162240 bytes (132 GB) copied, 520.076 s, 254 MB/s
126195+0 records in
126195+0 records out
132325048320 bytes (132 GB) copied, 520.174 s, 254 MB/s

Calculating the sizes of the disks ...done
Resizing the disks...done
Calculating the sizes of the disks ...done
Dropping the file ...done


The second phase of the script, Deallocation, uses dd to write zeros over the blocks beyond the current High Water Mark. One dd process per ASM disk is started:

[grid@xxxxxxxx space_reclaim]$ top
top - 10:13:02 up 44 days, 16:16, 4 users, load average: 16.63, 16.45, 13.75
Tasks: 732 total, 6 running, 726 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.8%us, 13.8%sy, 0.0%ni, 37.1%id, 43.9%wa, 0.0%hi, 2.4%si, 0.0%st
Mem: 131998748k total, 131419200k used, 579548k free, 42266420k buffers
Swap: 16777212k total, 0k used, 16777212k free, 3394532k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
 101 root 20 0 0 0 0 R 39.4 0.0 8:38.60 kswapd0
20332 grid 20 0 103m 1564 572 R 19.5 0.0 1:46.35 dd
20333 grid 20 0 103m 1568 572 D 18.2 0.0 1:44.93 dd
20325 grid 20 0 103m 1568 572 D 17.2 0.0 1:44.53 dd
20324 grid 20 0 103m 1568 572 R 15.6 0.0 1:20.63 dd
20328 grid 20 0 103m 1564 572 R 15.2 0.0 1:21.55 dd
20331 grid 20 0 103m 1568 572 D 14.6 0.0 1:21.42 dd
26113 oracle 20 0 60.2g 32m 26m S 14.6 0.0 0:00.75 oracle
20335 root 20 0 0 0 0 D 14.2 0.0 1:18.94 flush-252:24
20322 grid 20 0 103m 1568 572 D 13.9 0.0 1:21.51 dd
20342 root 20 0 0 0 0 D 13.2 0.0 1:16.61 flush-252:25
20338 root 20 0 0 0 0 R 12.9 0.0 1:17.42 flush-252:30
20336 root 20 0 0 0 0 D 10.9 0.0 1:00.66 flush-252:55
20339 root 20 0 0 0 0 D 10.9 0.0 0:57.79 flush-252:50
20340 root 20 0 0 0 0 D 10.3 0.0 0:58.42 flush-252:54
20337 root 20 0 0 0 0 D 9.6 0.0 0:58.24 flush-252:60
24409 root RT 0 889m 96m 57m S 5.3 0.1 2570:35 osysmond.bin
24861 root 0 -20 0 0 0 S 1.7 0.0 41:31.95 kworker/1:1H
21086 root 0 -20 0 0 0 S 1.3 0.0 36:24.40 kworker/7

[grid@xxxxxxxxxx~]$ ps -ef|grep 20332
grid 20332 20326 17 10:02 pts/0 00:01:16 /bin/dd if=/dev/zero of=/dev/mapper/asm500GB_360002ac000000000000000110000964cp1 seek=315461 bs=1024k count=196523

[grid@xxxxxxxxxx ~]$ ps -ef|grep 20325
grid 20325 20319 17 10:02 pts/0 00:01:35 /bin/dd if=/dev/zero of=/dev/mapper/asm500GB_360002ac0000000000000000d0000964cp1 seek=315309 bs=1024k count=196675



——————————————————————
ASM I/O Statistics during the disk group rebalance
——————————————————————

ASMCMD> lsop
Group_Name Dsk_Num State Power EST_WORK EST_RATE EST_TIME
DATA REBAL WAIT 7
ASMCMD>
ASMCMD> iostat -et 5
Group_Name Dsk_Name Reads Writes Read_Err Write_Err Read_Time Write_Time
DATA S1_DATA01_FG1 23030185984 2082245521408 0 0 629.202365 561627.214525
DATA S1_DATA02_FG1 9678848 2002875955200 0 0 141.271598 556226.65866
DATA S1_DATA03_FG1 101520732160 2016216610304 0 0 3024.887841 561404.578818
DATA S2_DATA01_FG1 819643435008 2062069520896 0 0 50319.400536 563116.826573
DATA S2_DATA02_FG1 1126678040576 2045156313600 0 0 56108.943316 555738.806255
DATA S2_DATA03_FG1 947842624000 1994103517696 0 0 51845.856561 545466.151177
FRA S1_FRA01_FG1 9695232 305258886144 0 0 251.129038 5234.922326
FRA S1_FRA02_FG1 9691136 324037302272 0 0 234.499119 5478.064898
FRA S1_FRA03_FG1 9674752 287679095808 0 0 237.140794 4322.92991
FRA S1_FRA04_FG1 9678848 279486220800 0 0 563.687636 3845.515979
FRA S1_FRA05_FG1 9687040 287006669312 0 0 236.97403 4162.291019
FRA S1_FRA06_FG1 9695232 305493610496 0 0 260.062194 4776.712435
FRA S1_FRA07_FG1 9691648 286196798976 0 0 236.804526 14257.967546
FRA S2_FRA01_FG1 28695552 282395977216 0 0 565.469092 3874.206606
FRA S2_FRA02_FG1 63110656 290152312832 0 0 622.124042 14264.906378
FRA S2_FRA03_FG1 10750508032 318696439808 0 0 214.440821 5200.272304
FRA S2_FRA04_FG1 102140928 311658688512 0 0 624.488925 5098.68159
FRA S2_FRA05_FG1 55187456 298768577536 0 0 587.286013 4398.231978
FRA S2_FRA06_FG1 33064960 289082719232 0 0 21.587277 4597.368455
FRA S2_FRA07_FG1 28070912 284403925504 0 0 568.334218 4320.709945
OCRVOTING S1_OCRVOTING01_FG1 9666560 4096 0 0 292.504971 .000388
OCRVOTING S1_OCRVOTING02_FG2 9674752 0 0 0 14.6555 0
OCRVOTING S2_OCRVOTING01_FG1 10866688 4096 0 0 99.140306 .000388
OCRVOTING S2_OCRVOTING02_FG2 9695232 4096 0 0 110.684821 .000388
OCRVOTING S3_OCRVOTING01_FG1 9666560 0 0 0 73.171492 0


Group_Name Dsk_Name Reads Writes Read_Err Write_Err Read_Time Write_Time
DATA S1_DATA01_FG1 1329561.60 51507.20 0.00 0.00 0.13 0.01
DATA S1_DATA02_FG1 773324.80 417792.00 0.00 0.00 0.14 0.03
DATA S1_DATA03_FG1 1255014.40 11468.80 0.00 0.00 0.18 0.00
DATA S2_DATA01_FG1 0.00 5734.40 0.00 0.00 0.00 0.00
DATA S2_DATA02_FG1 32768.00 30208.00 0.00 0.00 0.00 0.02
DATA S2_DATA03_FG1 0.00 416972.80 0.00 0.00 0.00 0.01
FRA S1_FRA01_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S1_FRA02_FG1 3276.80 10649.60 0.00 0.00 0.00 0.00
FRA S1_FRA03_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S1_FRA04_FG1 0.00 3276.80 0.00 0.00 0.00 0.00
FRA S1_FRA05_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S1_FRA06_FG1 0.00 3276.80 0.00 0.00 0.00 0.00
FRA S1_FRA07_FG1 0.00 4812.80 0.00 0.00 0.00 0.00
FRA S2_FRA01_FG1 0.00 819.20 0.00 0.00 0.00 0.00
FRA S2_FRA02_FG1 0.00 3276.80 0.00 0.00 0.00 0.00
FRA S2_FRA03_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S2_FRA04_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S2_FRA05_FG1 0.00 3276.80 0.00 0.00 0.00 0.00
FRA S2_FRA06_FG1 0.00 4812.80 0.00 0.00 0.00 0.00
FRA S2_FRA07_FG1 0.00 3276.80 0.00 0.00 0.00 0.00
OCRVOTING S1_OCRVOTING01_FG1 0.00 819.20 0.00 0.00 0.00 0.60
OCRVOTING S1_OCRVOTING02_FG2 0.00 819.20 0.00 0.00 0.00 0.60
OCRVOTING S2_OCRVOTING01_FG1 0.00 819.20 0.00 0.00 0.00 0.60
OCRVOTING S2_OCRVOTING02_FG2 0.00 819.20 0.00 0.00 0.00 0.60
OCRVOTING S3_OCRVOTING01_FG1 0.00 819.20 0.00 0.00 0.00 0.0


Group_Name Dsk_Name Reads Writes Read_Err Write_Err Read_Time Write_Time
DATA S1_DATA01_FG1 77004.80 248217.60 0.00 0.00 0.01 0.01
DATA S1_DATA02_FG1 6553.60 819.20 0.00 0.00 0.01 0.60
DATA S1_DATA03_FG1 83558.40 11468.80 0.00 0.00 0.01 0.00
DATA S2_DATA01_FG1 0.00 235110.40 0.00 0.00 0.00 0.01
DATA S2_DATA02_FG1 36044.80 17203.20 0.00 0.00 0.00 0.60
DATA S2_DATA03_FG1 0.00 8192.00 0.00 0.00 0.00 0.00
FRA S1_FRA01_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S1_FRA02_FG1 3276.80 11468.80 0.00 0.00 0.00 0.01
FRA S1_FRA03_FG1 0.00 233472.00 0.00 0.00 0.00 0.01
FRA S1_FRA04_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S1_FRA05_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S1_FRA06_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S1_FRA07_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S2_FRA01_FG1 0.00 1638.40 0.00 0.00 0.00 0.01
FRA S2_FRA02_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S2_FRA03_FG1 0.00 9830.40 0.00 0.00 0.00 0.00
FRA S2_FRA04_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S2_FRA05_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S2_FRA06_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S2_FRA07_FG1 0.00 233472.00 0.00 0.00 0.00 0.01
OCRVOTING S1_OCRVOTING01_FG1 0.00 1638.40 0.00 0.00 0.00 1.20
OCRVOTING S1_OCRVOTING02_FG2 0.00 1638.40 0.00 0.00 0.00 1.20
OCRVOTING S2_OCRVOTING01_FG1 0.00 1638.40 0.00 0.00 0.00 1.20
OCRVOTING S2_OCRVOTING02_FG2 0.00 1638.40 0.00 0.00 0.00 1.20
OCRVOTING S3_OCRVOTING01_FG1 0.00 1638.40 0.00 0.00 0.00 0.01

——————————————————————
ASM Alert Log produced during the execution of the ASRU utility
——————————————————————

Mon Apr 04 09:11:39 2016
SQL> ALTER DISKGROUP DATA RESIZE DISK S2_DATA03_FG1 SIZE 385840M DISK S1_DATA01_FG1 SIZE 385788M DISK S2_DATA02_FG1 SIZE 385812M DISK S1_DATA02_FG1 SIZE 385868M DISK S2_DATA01_FG1 SIZE 385788M DISK S1_DATA03_FG1 SIZE 385788M REBALANCE WAIT/* ASRU */
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
Mon Apr 04 09:12:11 2016
NOTE: membership refresh pending for group 1/0x48695261 (DATA)
Mon Apr 04 09:12:12 2016
GMON querying group 1 at 10 for pid 18, osid 25195
SUCCESS: refreshed membership for 1/0x48695261 (DATA)
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
NOTE: starting rebalance of group 1/0x48695261 (DATA) at power 7
Starting background process ARB0
Mon Apr 04 09:12:15 2016
ARB0 started with pid=41, OS id=46711
NOTE: assigning ARB0 to group 1/0x48695261 (DATA) with 7 parallel I/Os
cellip.ora not found.
Mon Apr 04 09:13:38 2016
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0x48695261 (DATA)
Mon Apr 04 09:13:39 2016
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
Mon Apr 04 09:13:42 2016
GMON updating for reconfiguration, group 1 at 11 for pid 41, osid 47334
NOTE: group 1 PST updated.
SUCCESS: disk S1_DATA01_FG1 resized to 96447 AUs
SUCCESS: disk S1_DATA02_FG1 resized to 96467 AUs
SUCCESS: disk S2_DATA01_FG1 resized to 96447 AUs
SUCCESS: disk S2_DATA02_FG1 resized to 96453 AUs
SUCCESS: disk S2_DATA03_FG1 resized to 96460 AUs
SUCCESS: disk S1_DATA03_FG1 resized to 96447 AUs
NOTE: resizing header on grp 1 disk S1_DATA01_FG1
NOTE: resizing header on grp 1 disk S1_DATA02_FG1
NOTE: resizing header on grp 1 disk S2_DATA01_FG1
NOTE: resizing header on grp 1 disk S2_DATA02_FG1
NOTE: resizing header on grp 1 disk S2_DATA03_FG1
NOTE: resizing header on grp 1 disk S1_DATA03_FG1
NOTE: membership refresh pending for group 1/0x48695261 (DATA)
GMON querying group 1 at 12 for pid 18, osid 25195
SUCCESS: refreshed membership for 1/0x48695261 (DATA)
Mon Apr 04 09:13:48 2016
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Mon Apr 04 09:13:49 2016
SUCCESS: ALTER DISKGROUP DATA RESIZE DISK S2_DATA03_FG1 SIZE 385840M DISK S1_DATA01_FG1 SIZE 385788M DISK S2_DATA02_FG1 SIZE 385812M DISK S1_DATA02_FG1 SIZE 385868M DISK S2_DATA01_FG1 SIZE 385788M DISK S1_DATA03_FG1 SIZE 385788M REBALANCE WAIT/* ASRU */
Mon Apr 04 09:22:42 2016
SQL> ALTER DISKGROUP DATA RESIZE DISK S2_DATA03_FG1 SIZE 511984M DISK S1_DATA01_FG1 SIZE 511984M DISK S2_DATA02_FG1 SIZE 511984M DISK S1_DATA02_FG1 SIZE 511984M DISK S2_DATA01_FG1 SIZE 511984M DISK S1_DATA03_FG1 SIZE 511984M REBALANCE WAIT/* ASRU */
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
NOTE: requesting all-instance disk validation for group=1
Mon Apr 04 09:22:46 2016
NOTE: disk validation pending for group 1/0x48695261 (DATA)
SUCCESS: validated disks for 1/0x48695261 (DATA)
Mon Apr 04 09:23:24 2016
NOTE: increased size in header on grp 1 disk S1_DATA01_FG1
NOTE: increased size in header on grp 1 disk S1_DATA02_FG1
NOTE: increased size in header on grp 1 disk S2_DATA01_FG1
NOTE: increased size in header on grp 1 disk S2_DATA02_FG1
NOTE: increased size in header on grp 1 disk S2_DATA03_FG1
NOTE: increased size in header on grp 1 disk S1_DATA03_FG1
Mon Apr 04 09:23:24 2016
NOTE: membership refresh pending for group 1/0x48695261 (DATA)
Mon Apr 04 09:23:26 2016
GMON querying group 1 at 13 for pid 18, osid 25195
SUCCESS: refreshed membership for 1/0x48695261 (DATA)
NOTE: starting rebalance of group 1/0x48695261 (DATA) at power 7
Starting background process ARB0
Mon Apr 04 09:23:26 2016
ARB0 started with pid=38, OS id=53105
NOTE: assigning ARB0 to group 1/0x48695261 (DATA) with 7 parallel I/Os
cellip.ora not found.
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Mon Apr 04 09:23:37 2016
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0x48695261 (DATA)
Mon Apr 04 09:23:38 2016
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
NOTE: membership refresh pending for group 1/0x48695261 (DATA)
Mon Apr 04 09:23:44 2016
GMON querying group 1 at 14 for pid 18, osid 25195
SUCCESS: refreshed membership for 1/0x48695261 (DATA)
Mon Apr 04 09:23:47 2016
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Mon Apr 04 09:23:48 2016
SUCCESS: ALTER DISKGROUP DATA RESIZE DISK S2_DATA03_FG1 SIZE 511984M DISK S1_DATA01_FG1 SIZE 511984M DISK S2_DATA02_FG1 SIZE 511984M DISK S1_DATA02_FG1 SIZE 511984M DISK S2_DATA01_FG1 SIZE 511984M DISK S1_DATA03_FG1 SIZE 511984M REBALANCE WAIT/* ASRU */
Mon Apr 04 09:23:50 2016
SQL> /* ASRU */alter diskgroup DATA drop file '+DATA/tpfile'
SUCCESS: /* ASRU */alter diskgroup DATA drop file '+DATA/tpfile'



Once the ASRU utility has completed, the Storage Administrator should invoke the Space Compact from the 3PAR console.

ASM 11gR2 Create ACFS Cluster FS

#####################################################
##           Step by step how to create Oracle ACFS Cluster Filesystem       ##
#####################################################

[grid@lnxcld02 trace]$ asmcmd


  Type "help [command]" to get help on a specific ASMCMD command.

        commands:
        --------

        md_backup, md_restore

        lsattr, setattr

        cd, cp, du, find, help, ls, lsct, lsdg, lsof, mkalias
        mkdir, pwd, rm, rmalias

        chdg, chkdg, dropdg, iostat, lsdsk, lsod, mkdg, mount
        offline, online, rebal, remap, umount

        dsget, dsset, lsop, shutdown, spbackup, spcopy, spget
        spmove, spset, startup

        chtmpl, lstmpl, mktmpl, rmtmpl

        chgrp, chmod, chown, groups, grpmod, lsgrp, lspwusr, lsusr
        mkgrp, mkusr, orapwusr, passwd, rmgrp, rmusr

        volcreate, voldelete, voldisable, volenable, volinfo
        volresize, volset, volstat


ASMCMD>     
ASMCMD> volcreate -G FRA1 -s 5G Vol_ACFS01
ASMCMD> volinfo -a
Diskgroup Name: FRA1

         Volume Name: VOL_ACFS01
         Volume Device: /dev/asm/vol_acfs01-199
         State: ENABLED
         Size (MB): 5120
         Resize Unit (MB): 32
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage:
         Mountpath:

ASMCMD> volenable -a
ASMCMD>
ASMCMD> exit


[grid@lnxcld02 trace]$ acfsdriverstate version
ACFS-9325:     Driver OS kernel version = 2.6.18-8.el5(i386).
ACFS-9326:     Driver Oracle version = 110803.1.
[grid@lnxcld02 trace]$ acfsdriverstate loaded
ACFS-9203: true



SQL> SELECT volume_name, volume_device FROM V$ASM_VOLUME;

VOLUME_NAME                    VOLUME_DEVICE
------------------------------ ----------------------------------------
VOL_ACFS01                     /dev/asm/vol_acfs01-199

1 row selected.

---------------------------------------------------------------------------------

[root@lnxcld02 adump]# ls -la /dev/asm/vol_acfs01-199
brwxrwx--- 1 root asmadmin 252, 101889 Nov  1 20:03 /dev/asm/vol_acfs01-199

[root@lnxcld02 adump]# mkdir /cloud_FS
[root@lnxcld01 adump]# mkdir /cloud_FS


[root@lnxcld02 adump]# mkfs -t acfs /dev/asm/vol_acfs01-199
mkfs.acfs: version                   = 11.2.0.3.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/vol_acfs01-199
mkfs.acfs: volume size               = 5368709120
mkfs.acfs: Format complete.


[root@lnxcld02 adump]# acfsutil registry -a -f /dev/asm/vol_acfs01-199 /cloud_FS
acfsutil registry: mount point /cloud_FS successfully added to Oracle Registry


[root@lnxcld02 adump]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1              11G  3.6G  6.0G  38% /
/dev/hdb1              12G  7.2G  3.9G  66% /home
tmpfs                 1.5G  634M  867M  43% /dev/shm
Oracle_Software       293G  180G  114G  62% /media/sf_Oracle_Software
/dev/hdc               40G   18G   22G  45% /u01
/dev/asm/vol_acfs01-199
                      5.0G   75M  5.0G   2% /cloud_FS

                      
                      
SQL> select * from v$asm_volume;

GROUP_NUMBER VOLUME_NAME                    COMPOUND_INDEX    SIZE_MB VOLUME_NUMBER REDUND STRIPE_COLUMNS STRIPE_WIDTH_K STATE            FILE_NUMBER
------------ ------------------------------ -------------- ---------- ------------- ------ -------------- -------------- ---------------- -----------
INCARNATION DRL_FILE_NUMBER RESIZE_UNIT_MB USAGE                          VOLUME_DEVICE                            MOUNTPATH
----------- --------------- -------------- ------------------------------ ---------------------------------------- --------------------
           2 VOL_ACFS01                           33554433       5120             1 UNPROT              4            128 ENABLED                  270   
766094623               0             32    ACFS                           /dev/asm/vol_acfs01-199                  /cloud_FS


1 row selected.


ASM Commands

################################################################
# Adding/Removing/Managing ASM instances
################################################################

--Use the following syntax to add configuration information about an existing ASM instance:
 srvctl add asm -n node_name -i +asm_instance_name -o oracle_home

--Use the following syntax to remove an ASM instance:
 srvctl remove asm -n node_name [-i +asm_instance_name]

--Use the following syntax to enable an ASM instance:
 srvctl enable asm -n node_name [-i +asm_instance_name]

--Use the following syntax to disable an ASM instance:
 srvctl disable asm -n node_name [-i +asm_instance_name]

--Use the following syntax to start an ASM instance:
 srvctl start asm -n node_name [-i +asm_instance_name] [-o start_options]

--Use the following syntax to stop an ASM instance:
 srvctl stop asm -n node_name [-i +asm_instance_name] [-o stop_options]

--Use the following syntax to show the configuration of an ASM instance:
 srvctl config asm -n node_name

--Use the following syntax to obtain the status of an ASM instance:
 srvctl status asm -n node_name

P.S.:

For all of the SRVCTL commands in this section for which an option is not required, if the instance name “-i” is not specified the command applies to all ASM instances.


###################################
# Managing DiskGroup inside ASM:
###################################

--Note that adding or dropping disks will initiate a rebalance of the data on the disks.
--The status of these processes can be shown by selecting from v$asm_operation, as in the example below.
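
--For example (a minimal sketch using standard v$asm_operation columns):
 select group_number, operation, state, power, est_minutes from v$asm_operation;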

--Querying ASM Disk Groups
 col name format a25
 col DATABASE_COMPATIBILITY format a10
 col COMPATIBILITY format a10
 select * from v$asm_diskgroup;
 --or
 select name, state, type, total_mb, free_mb from v$asm_diskgroup;
--Querying ASM Disks
 col PATH format a55
 col name format a25
 select name, path, group_number, TOTAL_MB, FREE_MB, READS, WRITES, READ_TIME,
 WRITE_TIME from v$asm_disk order by 3,1;
 --or
 col PATH format a50
 col HEADER_STATUS  format a12
 col name format a25
 --select INCARNATION,
 select name, path, MOUNT_STATUS,HEADER_STATUS, MODE_STATUS, STATE, group_number,
 OS_MB, TOTAL_MB, FREE_MB, READS, WRITES, READ_TIME, WRITE_TIME, BYTES_READ,
 BYTES_WRITTEN, REPAIR_TIMER, MOUNT_DATE, CREATE_DATE from v$asm_disk;


################################################################
# Tuning and Analysis
################################################################

--Performance Statistics

--N.B. Time in hundredths of a second!

 col READ_TIME format 9999999999.99
 col WRITE_TIME format 9999999999.99
 col BYTES_READ format 99999999999999.99
 col BYTES_WRITTEN  format 99999999999999.99
 select name, STATE, group_number, TOTAL_MB, FREE_MB,READS, WRITES, READ_TIME,
 WRITE_TIME, BYTES_READ, BYTES_WRITTEN, REPAIR_TIMER,MOUNT_DATE
 from v$asm_disk order by group_number, name;


--Check the Num of Extents in use per Disk inside one Disk Group.
 select max(substr(name,1,30)) group_name, count(PXN_KFFXP) extents_per_disk,
 DISK_KFFXP, GROUP_KFFXP from x$kffxp, v$ASM_DISKGROUP gr
 where GROUP_KFFXP=&group_nr and GROUP_KFFXP=GROUP_NUMBER
 group by GROUP_KFFXP, DISK_KFFXP order by GROUP_KFFXP, DISK_KFFXP;

--Find The File distribution Between Disks
 SELECT * FROM v$asm_alias  WHERE  name='PWX_DATA.272.669293645';

SELECT GROUP_KFFXP Group#,DISK_KFFXP Disk#,AU_KFFXP AU#,XNUM_KFFXP Extent#
 FROM   X$KFFXP WHERE  number_kffxp=(SELECT file_number FROM v$asm_alias
 WHERE name='PWX_DATA.272.669293645');

--or

SELECT GROUP_KFFXP Group#,DISK_KFFXP Disk#,AU_KFFXP AU#,XNUM_KFFXP Extent#
 FROM X$KFFXP WHERE  number_kffxp=&DataFile_Number;

--or
 select d.name, XV.GROUP_KFFXP Group#, XV.DISK_KFFXP Disk#,
 XV.NUMBER_KFFXP File_Number, XV.AU_KFFXP AU#, XV.XNUM_KFFXP Extent#,
 XV.ADDR, XV.INDX, XV.INST_ID, XV.COMPOUND_KFFXP, XV.INCARN_KFFXP,
 XV.PXN_KFFXP, XV.XNUM_KFFXP,XV.LXN_KFFXP, XV.FLAGS_KFFXP,
 XV.CHK_KFFXP, XV.SIZE_KFFXP from v$asm_disk d, X$KFFXP XV
 where d.GROUP_NUMBER=XV.GROUP_KFFXP and d.DISK_NUMBER=XV.DISK_KFFXP
 and number_kffxp=&File_NUM order by 2,3,4;

--List the hierarchical tree of files stored in the diskgroup
 SELECT concat('+'||gname, sys_connect_by_path(aname, '/')) full_alias_path FROM
 (SELECT g.name gname, a.parent_index pindex, a.name aname,
 a.reference_index rindex FROM v$asm_alias a, v$asm_diskgroup g
 WHERE a.group_number = g.group_number)
 START WITH (mod(pindex, power(2, 24))) = 0
 CONNECT BY PRIOR rindex = pindex;


###################################
#Create and Modify Disk Group
###################################

create diskgroup FRA1 external redundancy disk '/dev/vx/rdsk/oraASMdg/fra1'
 ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

alter diskgroup FRA1  check all;

--on +ASM2 :
 alter diskgroup FRA1 mount;

--Add a second disk:
 alter diskgroup FRA1 add disk '/dev/vx/rdsk/oraASMdg/fra2';

--Add several disks with a wildcard:
 alter diskgroup FRA1 add disk '/dev/vx/rdsk/oraASMdg/fra*';

--Remove a disk from a diskgroup:
 alter diskgroup FRA1 drop disk 'FRA1_0002';

--Drop the entire DiskGroup
 drop diskgroup DATA1 including contents;

--How to DROP the entire DiskGroup when it is in NOMOUNT Status
 --Generate the dd command which will reset the header of all the
 --disks belonging to GROUP_NUMBER=0!!!!
 select 'dd if=/dev/zero of=''' ||PATH||''' bs=8192 count=100' from v$asm_disk
 where GROUP_NUMBER=0;

select * from v$asm_operation;

————————————————————————–

alter diskgroup FRA1 drop disk 'FRA1_0002';
 alter diskgroup FRA1 add disk '/dev/vx/rdsk/fra1dg/fra3';

alter diskgroup FRA1 drop disk 'FRA1_0003';
 alter diskgroup FRA1 add disk '/dev/vx/rdsk/fra1dg/fra4';


When a new diskgroup is created, it is only mounted on the local instance,
and only the instance-specific entry for the asm_diskgroups parameter is updated.
By manually mounting the diskgroup on the other instances, the asm_diskgroups parameter on those instances is updated as well (a quick check is shown after the example below).

--on +ASM1 :
 create diskgroup FRA1 external redundancy disk '/dev/vx/rdsk/fradg/fra1'
 ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

--on +ASM2 :
 alter diskgroup FRA1 mount;
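
--A quick check that asm_diskgroups now includes the new diskgroup on every
 --instance (a minimal sketch using gv$parameter):
 select inst_id, value from gv$parameter where name = 'asm_diskgroups';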

--It works even for ongoing rebalances!!!
 alter diskgroup DATA1 rebalance power 10;


################################################################
# New ASM Command Line Utility (ASMCMD) Commands and Options
################################################################

ASMCMD Command Reference:

Command Description
 --------------------
 - cd Command: Changes the current directory to the specified directory.
 - cp Command: Enables you to copy files between ASM disk groups on a local instance and remote instances.
 - du Command: Displays the total disk space occupied by ASM files in the specified ASM directory and all of its subdirectories, recursively.
 - exit Command: Exits ASMCMD.
 - find Command: Lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.
 - help Command: Displays the syntax and description of ASMCMD commands.
 - ls Command: Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
 - lsct Command: Lists information about current ASM clients.
 - lsdg Command: Lists all disk groups and their attributes.
 - lsdsk Command: Lists disks visible to ASM.
 - md_backup Command: Creates a backup of all of the mounted disk groups.
 - md_restore Command: Restores disk groups from a backup.
 - mkalias Command: Creates an alias for system-generated filenames.
 - mkdir Command: Creates ASM directories.
 - pwd Command: Displays the path of the current ASM directory.
 - remap Command: Repairs a range of physical blocks on a disk.
 - rm Command: Deletes the specified ASM files or directories.
 - rmalias Command: Deletes the specified alias, retaining the file that the alias points to.

--------
 -- kfed tool From Unix Prompt for reading ASM disk header.
 kfed read /dev/vx/rdsk/fra1dg/fra1


################################################################
# CREATE and Manage Tablespaces and Datafiles on ASM
################################################################

CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;

ALTER TABLESPACE sysaux ADD DATAFILE '+disk_group_1' SIZE 100M;

ALTER DATABASE DATAFILE '+DATA1/dbname/datafile/audit.259.668957419' RESIZE 150M;
-------------------------
 create diskgroup DATA1 external redundancy disk '/dev/vx/rdsk/oraASMdg/fra1'
 ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

select 'alter diskgroup DATA1 add disk ''' || PATH || ''';' from v$asm_disk
 where GROUP_NUMBER=0 and rownum<=&Num_Disks_to_add;

select 'alter diskgroup FRA1 add disk ''' || PATH || ''';' from v$asm_disk
 where GROUP_NUMBER=0 and rownum<=&Num_Disks_to_add;
