Bug on Oracle 12c Multitenant & PDB Clone as Snapshot Copy

While automating the refresh of test databases in an Oracle 12c Multitenant environment with ACFS and PDB snapshot copy, I encountered the following bug:

The column SNAPSHOT_PARENT_CON_ID of the view V$PDBS shows 0 (zero) for PDBs created as Snapshot Copies.

This bug makes it impossible to identify the parent-child relationship between a PDB and its own Snapshot Copies.

The test case below explains the problem:

SQL> CREATE PLUGGABLE DATABASE LARTE3SEFU from LARTE3 SNAPSHOT COPY;

Pluggable database created.

SQL> select CON_ID, NAME, OPEN_MODE, SNAPSHOT_PARENT_CON_ID from v$pdbs where NAME in ('LARTE3SEFU','LARTE3');

CON_ID      NAME          OPEN_MODE  SNAPSHOT_PARENT_CON_ID
---------- -------------- ---------- ----------------------
5          LARTE3         READ ONLY  0
16         LARTE3SEFU     MOUNTED    0  <-- This should be 5

2 rows selected.

A Service Request has been opened with Oracle; I'll update this post once I have the official answer.

Update from the Service Request: the bug is fixed in version 12.2.


ASM Storage Reclamation Utility (ASRU) for HP 3PAR Thin Provisioning

 

ASM Storage Reclamation Utility (ASRU) reclaims storage from an ASM disk group that was previously allocated but is no longer in use, for example after decommissioning a database. This Perl script writes blocks of zeros where space is currently unallocated; the zero blocks are interpreted by the 3PAR Storage Server as physical space to reclaim.

The execution of the ASRU script consists of three sequential phases (a command-level sketch follows the list):

  1. Compaction: the disks are logically resized, keeping 25% of free space for future needs, without affecting their physical size. This operation triggers an ASM disk group rebalance that compacts the data at the beginning of the disks.
  2. Deallocation: this phase writes blocks of zeros above the current data High Water Mark; those zero blocks are interpreted by the storage as space available for reclaiming.
  3. Expansion: the utility resizes the logical disks back to their original size; because the data remains untouched, no ASM rebalance is required.
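Conceptually, the three phases map to plain ASM resize commands plus dd, as the ASM alert log later in this post confirms. The sketch below is illustrative only: the disk name, device path and sizes are hypothetical, and ASRU computes the real values for every disk by itself.

-- Phase 1, Compaction: shrink the disk logically; REBALANCE WAIT compacts
-- the data at the beginning of the disks.
SQL> ALTER DISKGROUP DATA RESIZE DISK S1_DATA01_FG1 SIZE 385788M REBALANCE WAIT;

-- Phase 2, Deallocation: one dd per disk zeroes the space above the High
-- Water Mark; with bs=1024k, seek + count equals the original size in MB
-- (385788 + 126196 = 511984).
# dd if=/dev/zero of=/dev/mapper/asmdisk01p1 seek=385788 bs=1024k count=126196

-- Phase 3, Expansion: restore the original logical size; the data is
-- untouched, so the rebalance completes almost immediately.
SQL> ALTER DISKGROUP DATA RESIZE DISK S1_DATA01_FG1 SIZE 511984M REBALANCE WAIT;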

 

How to use ASRU

ASM Disk Groups

 

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 4096 4194304 3071904 1220008 511984 354012 0 N DATA/
MOUNTED NORMAL N 512 4096 4194304 7167776 3631252 511984 1559634 0 N FRA/
MOUNTED HIGH N 512 4096 1048576 41886 40621 20448 6405 0 Y OCRVOTING/
ASMCMD>

——————————————————————
Invoke the ASRU utility with the Grid Infrastructure owner
——————————————————————

[grid@xxxxxxxx space_reclaim]$ bash ASRU DATA
Checking the system ...done
Calculating the sizes of the disks ...done
Writing the data to a file ...done
Resizing the disks...done
Calculating the sizes of the disks ...done

/u01/GRID/11.2.0.4/perl/bin/perl -I /u01/GRID/11.2.0.4/perl/lib/5.10.0 /cloudfs/space_reclaim/zerofill 7 /dev/mapper/asm500GB_360002ac0000000000000000c0000964bp1 385789 511984 /dev/mapper/asm500GB_360002ac000000000000000150000964cp1 385841 511984 /dev/mapper/asm500GB_360002ac000000000000000160000964cp1 385813 511984 /dev/mapper/asm500GB_360002ac000000000000000110000964bp1 385869 511984 /dev/mapper/asm500GB_360002ac000000000000000120000964bp1 385789 511984 /dev/mapper/asm500GB_360002ac000000000000000140000964cp1 385789 511984
126171+0 records in
126171+0 records out
132299882496 bytes (132 GB) copied, 519.831 s, 255 MB/s
126195+0 records in
126195+0 records out
132325048320 bytes (132 GB) copied, 519.927 s, 255 MB/s
126195+0 records in
126195+0 records out
132325048320 bytes (132 GB) copied, 520.045 s, 254 MB/s
126143+0 records in
126143+0 records out
132270522368 bytes (132 GB) copied, 520.064 s, 254 MB/s
126115+0 records in
126115+0 records out
132241162240 bytes (132 GB) copied, 520.076 s, 254 MB/s
126195+0 records in
126195+0 records out
132325048320 bytes (132 GB) copied, 520.174 s, 254 MB/s

Calculating the sizes of the disks ...done
Resizing the disks...done
Calculating the sizes of the disks ...done
Dropping the file ...done

 

The second phase of the script, Deallocation, uses dd to write zeros over the blocks beyond the HWM. One dd process per ASM disk is started:

[grid@xxxxxxxx space_reclaim]$ top
top - 10:13:02 up 44 days, 16:16, 4 users, load average: 16.63, 16.45, 13.75
Tasks: 732 total, 6 running, 726 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.8%us, 13.8%sy, 0.0%ni, 37.1%id, 43.9%wa, 0.0%hi, 2.4%si, 0.0%st
Mem: 131998748k total, 131419200k used, 579548k free, 42266420k buffers
Swap: 16777212k total, 0k used, 16777212k free, 3394532k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
 101 root 20 0 0 0 0 R 39.4 0.0 8:38.60 kswapd0
20332 grid 20 0 103m 1564 572 R 19.5 0.0 1:46.35 dd
20333 grid 20 0 103m 1568 572 D 18.2 0.0 1:44.93 dd
20325 grid 20 0 103m 1568 572 D 17.2 0.0 1:44.53 dd
20324 grid 20 0 103m 1568 572 R 15.6 0.0 1:20.63 dd
20328 grid 20 0 103m 1564 572 R 15.2 0.0 1:21.55 dd
20331 grid 20 0 103m 1568 572 D 14.6 0.0 1:21.42 dd
26113 oracle 20 0 60.2g 32m 26m S 14.6 0.0 0:00.75 oracle
20335 root 20 0 0 0 0 D 14.2 0.0 1:18.94 flush-252:24
20322 grid 20 0 103m 1568 572 D 13.9 0.0 1:21.51 dd
20342 root 20 0 0 0 0 D 13.2 0.0 1:16.61 flush-252:25
20338 root 20 0 0 0 0 R 12.9 0.0 1:17.42 flush-252:30
20336 root 20 0 0 0 0 D 10.9 0.0 1:00.66 flush-252:55
20339 root 20 0 0 0 0 D 10.9 0.0 0:57.79 flush-252:50
20340 root 20 0 0 0 0 D 10.3 0.0 0:58.42 flush-252:54
20337 root 20 0 0 0 0 D 9.6 0.0 0:58.24 flush-252:60
24409 root RT 0 889m 96m 57m S 5.3 0.1 2570:35 osysmond.bin
24861 root 0 -20 0 0 0 S 1.7 0.0 41:31.95 kworker/1:1H
21086 root 0 -20 0 0 0 S 1.3 0.0 36:24.40 kworker/7

[grid@xxxxxxxxxx~]$ ps -ef|grep 20332
grid 20332 20326 17 10:02 pts/0 00:01:16 /bin/dd if=/dev/zero of=/dev/mapper/asm500GB_360002ac000000000000000110000964cp1 seek=315461 bs=1024k count=196523

[grid@xxxxxxxxxx ~]$ ps -ef|grep 20325
grid 20325 20319 17 10:02 pts/0 00:01:35 /bin/dd if=/dev/zero of=/dev/mapper/asm500GB_360002ac0000000000000000d0000964cp1 seek=315309 bs=1024k count=196675
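A quick sanity check on those dd parameters: with bs=1024k, seek is the starting offset in MB and count the number of MB written, so seek + count should equal each disk's original size of 511984 MB, exactly the size the disks are restored to in the Expansion phase:

# echo $((315461 + 196523))
511984
# echo $((315309 + 196675))
511984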


 

——————————————————————
ASM I/O Statistics during the disk group rebalance
——————————————————————

ASMCMD> lsop
Group_Name Dsk_Num State Power EST_WORK EST_RATE EST_TIME
DATA REBAL WAIT 7
ASMCMD>
ASMCMD> iostat -et 5
Group_Name Dsk_Name Reads Writes Read_Err Write_Err Read_Time Write_Time
DATA S1_DATA01_FG1 23030185984 2082245521408 0 0 629.202365 561627.214525
DATA S1_DATA02_FG1 9678848 2002875955200 0 0 141.271598 556226.65866
DATA S1_DATA03_FG1 101520732160 2016216610304 0 0 3024.887841 561404.578818
DATA S2_DATA01_FG1 819643435008 2062069520896 0 0 50319.400536 563116.826573
DATA S2_DATA02_FG1 1126678040576 2045156313600 0 0 56108.943316 555738.806255
DATA S2_DATA03_FG1 947842624000 1994103517696 0 0 51845.856561 545466.151177
FRA S1_FRA01_FG1 9695232 305258886144 0 0 251.129038 5234.922326
FRA S1_FRA02_FG1 9691136 324037302272 0 0 234.499119 5478.064898
FRA S1_FRA03_FG1 9674752 287679095808 0 0 237.140794 4322.92991
FRA S1_FRA04_FG1 9678848 279486220800 0 0 563.687636 3845.515979
FRA S1_FRA05_FG1 9687040 287006669312 0 0 236.97403 4162.291019
FRA S1_FRA06_FG1 9695232 305493610496 0 0 260.062194 4776.712435
FRA S1_FRA07_FG1 9691648 286196798976 0 0 236.804526 14257.967546
FRA S2_FRA01_FG1 28695552 282395977216 0 0 565.469092 3874.206606
FRA S2_FRA02_FG1 63110656 290152312832 0 0 622.124042 14264.906378
FRA S2_FRA03_FG1 10750508032 318696439808 0 0 214.440821 5200.272304
FRA S2_FRA04_FG1 102140928 311658688512 0 0 624.488925 5098.68159
FRA S2_FRA05_FG1 55187456 298768577536 0 0 587.286013 4398.231978
FRA S2_FRA06_FG1 33064960 289082719232 0 0 21.587277 4597.368455
FRA S2_FRA07_FG1 28070912 284403925504 0 0 568.334218 4320.709945
OCRVOTING S1_OCRVOTING01_FG1 9666560 4096 0 0 292.504971 .000388
OCRVOTING S1_OCRVOTING02_FG2 9674752 0 0 0 14.6555 0
OCRVOTING S2_OCRVOTING01_FG1 10866688 4096 0 0 99.140306 .000388
OCRVOTING S2_OCRVOTING02_FG2 9695232 4096 0 0 110.684821 .000388
OCRVOTING S3_OCRVOTING01_FG1 9666560 0 0 0 73.171492 0


Group_Name Dsk_Name Reads Writes Read_Err Write_Err Read_Time Write_Time
DATA S1_DATA01_FG1 1329561.60 51507.20 0.00 0.00 0.13 0.01
DATA S1_DATA02_FG1 773324.80 417792.00 0.00 0.00 0.14 0.03
DATA S1_DATA03_FG1 1255014.40 11468.80 0.00 0.00 0.18 0.00
DATA S2_DATA01_FG1 0.00 5734.40 0.00 0.00 0.00 0.00
DATA S2_DATA02_FG1 32768.00 30208.00 0.00 0.00 0.00 0.02
DATA S2_DATA03_FG1 0.00 416972.80 0.00 0.00 0.00 0.01
FRA S1_FRA01_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S1_FRA02_FG1 3276.80 10649.60 0.00 0.00 0.00 0.00
FRA S1_FRA03_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S1_FRA04_FG1 0.00 3276.80 0.00 0.00 0.00 0.00
FRA S1_FRA05_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S1_FRA06_FG1 0.00 3276.80 0.00 0.00 0.00 0.00
FRA S1_FRA07_FG1 0.00 4812.80 0.00 0.00 0.00 0.00
FRA S2_FRA01_FG1 0.00 819.20 0.00 0.00 0.00 0.00
FRA S2_FRA02_FG1 0.00 3276.80 0.00 0.00 0.00 0.00
FRA S2_FRA03_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S2_FRA04_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S2_FRA05_FG1 0.00 3276.80 0.00 0.00 0.00 0.00
FRA S2_FRA06_FG1 0.00 4812.80 0.00 0.00 0.00 0.00
FRA S2_FRA07_FG1 0.00 3276.80 0.00 0.00 0.00 0.00
OCRVOTING S1_OCRVOTING01_FG1 0.00 819.20 0.00 0.00 0.00 0.60
OCRVOTING S1_OCRVOTING02_FG2 0.00 819.20 0.00 0.00 0.00 0.60
OCRVOTING S2_OCRVOTING01_FG1 0.00 819.20 0.00 0.00 0.00 0.60
OCRVOTING S2_OCRVOTING02_FG2 0.00 819.20 0.00 0.00 0.00 0.60
OCRVOTING S3_OCRVOTING01_FG1 0.00 819.20 0.00 0.00 0.00 0.0


Group_Name Dsk_Name Reads Writes Read_Err Write_Err Read_Time Write_Time
DATA S1_DATA01_FG1 77004.80 248217.60 0.00 0.00 0.01 0.01
DATA S1_DATA02_FG1 6553.60 819.20 0.00 0.00 0.01 0.60
DATA S1_DATA03_FG1 83558.40 11468.80 0.00 0.00 0.01 0.00
DATA S2_DATA01_FG1 0.00 235110.40 0.00 0.00 0.00 0.01
DATA S2_DATA02_FG1 36044.80 17203.20 0.00 0.00 0.00 0.60
DATA S2_DATA03_FG1 0.00 8192.00 0.00 0.00 0.00 0.00
FRA S1_FRA01_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S1_FRA02_FG1 3276.80 11468.80 0.00 0.00 0.00 0.01
FRA S1_FRA03_FG1 0.00 233472.00 0.00 0.00 0.00 0.01
FRA S1_FRA04_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S1_FRA05_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S1_FRA06_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S1_FRA07_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S2_FRA01_FG1 0.00 1638.40 0.00 0.00 0.00 0.01
FRA S2_FRA02_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S2_FRA03_FG1 0.00 9830.40 0.00 0.00 0.00 0.00
FRA S2_FRA04_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S2_FRA05_FG1 0.00 6553.60 0.00 0.00 0.00 0.00
FRA S2_FRA06_FG1 0.00 0.00 0.00 0.00 0.00 0.00
FRA S2_FRA07_FG1 0.00 233472.00 0.00 0.00 0.00 0.01
OCRVOTING S1_OCRVOTING01_FG1 0.00 1638.40 0.00 0.00 0.00 1.20
OCRVOTING S1_OCRVOTING02_FG2 0.00 1638.40 0.00 0.00 0.00 1.20
OCRVOTING S2_OCRVOTING01_FG1 0.00 1638.40 0.00 0.00 0.00 1.20
OCRVOTING S2_OCRVOTING02_FG2 0.00 1638.40 0.00 0.00 0.00 1.20
OCRVOTING S3_OCRVOTING01_FG1 0.00 1638.40 0.00 0.00 0.00 0.01

——————————————————————
ASM Alert Log produced during the execution of the ASRU utility
——————————————————————

Mon Apr 04 09:11:39 2016
SQL> ALTER DISKGROUP DATA RESIZE DISK S2_DATA03_FG1 SIZE 385840M DISK S1_DATA01_FG1 SIZE 385788M DISK S2_DATA02_FG1 SIZE 385812M DISK S1_DATA02_FG1 SIZE 385868M DISK S2_DATA01_FG1 SIZE 385788M DISK S1_DATA03_FG1 SIZE 385788M REBALANCE WAIT/* ASRU */
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
Mon Apr 04 09:12:11 2016
NOTE: membership refresh pending for group 1/0x48695261 (DATA)
Mon Apr 04 09:12:12 2016
GMON querying group 1 at 10 for pid 18, osid 25195
SUCCESS: refreshed membership for 1/0x48695261 (DATA)
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
NOTE: starting rebalance of group 1/0x48695261 (DATA) at power 7
Starting background process ARB0
Mon Apr 04 09:12:15 2016
ARB0 started with pid=41, OS id=46711
NOTE: assigning ARB0 to group 1/0x48695261 (DATA) with 7 parallel I/Os
cellip.ora not found.
Mon Apr 04 09:13:38 2016
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0x48695261 (DATA)
Mon Apr 04 09:13:39 2016
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
Mon Apr 04 09:13:42 2016
GMON updating for reconfiguration, group 1 at 11 for pid 41, osid 47334
NOTE: group 1 PST updated.
SUCCESS: disk S1_DATA01_FG1 resized to 96447 AUs
SUCCESS: disk S1_DATA02_FG1 resized to 96467 AUs
SUCCESS: disk S2_DATA01_FG1 resized to 96447 AUs
SUCCESS: disk S2_DATA02_FG1 resized to 96453 AUs
SUCCESS: disk S2_DATA03_FG1 resized to 96460 AUs
SUCCESS: disk S1_DATA03_FG1 resized to 96447 AUs
NOTE: resizing header on grp 1 disk S1_DATA01_FG1
NOTE: resizing header on grp 1 disk S1_DATA02_FG1
NOTE: resizing header on grp 1 disk S2_DATA01_FG1
NOTE: resizing header on grp 1 disk S2_DATA02_FG1
NOTE: resizing header on grp 1 disk S2_DATA03_FG1
NOTE: resizing header on grp 1 disk S1_DATA03_FG1
NOTE: membership refresh pending for group 1/0x48695261 (DATA)
GMON querying group 1 at 12 for pid 18, osid 25195
SUCCESS: refreshed membership for 1/0x48695261 (DATA)
Mon Apr 04 09:13:48 2016
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Mon Apr 04 09:13:49 2016
SUCCESS: ALTER DISKGROUP DATA RESIZE DISK S2_DATA03_FG1 SIZE 385840M DISK S1_DATA01_FG1 SIZE 385788M DISK S2_DATA02_FG1 SIZE 385812M DISK S1_DATA02_FG1 SIZE 385868M DISK S2_DATA01_FG1 SIZE 385788M DISK S1_DATA03_FG1 SIZE 385788M REBALANCE WAIT/* ASRU */
Mon Apr 04 09:22:42 2016
SQL> ALTER DISKGROUP DATA RESIZE DISK S2_DATA03_FG1 SIZE 511984M DISK S1_DATA01_FG1 SIZE 511984M DISK S2_DATA02_FG1 SIZE 511984M DISK S1_DATA02_FG1 SIZE 511984M DISK S2_DATA01_FG1 SIZE 511984M DISK S1_DATA03_FG1 SIZE 511984M REBALANCE WAIT/* ASRU */
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
NOTE: requesting all-instance disk validation for group=1
Mon Apr 04 09:22:46 2016
NOTE: disk validation pending for group 1/0x48695261 (DATA)
SUCCESS: validated disks for 1/0x48695261 (DATA)
Mon Apr 04 09:23:24 2016
NOTE: increased size in header on grp 1 disk S1_DATA01_FG1
NOTE: increased size in header on grp 1 disk S1_DATA02_FG1
NOTE: increased size in header on grp 1 disk S2_DATA01_FG1
NOTE: increased size in header on grp 1 disk S2_DATA02_FG1
NOTE: increased size in header on grp 1 disk S2_DATA03_FG1
NOTE: increased size in header on grp 1 disk S1_DATA03_FG1
Mon Apr 04 09:23:24 2016
NOTE: membership refresh pending for group 1/0x48695261 (DATA)
Mon Apr 04 09:23:26 2016
GMON querying group 1 at 13 for pid 18, osid 25195
SUCCESS: refreshed membership for 1/0x48695261 (DATA)
NOTE: starting rebalance of group 1/0x48695261 (DATA) at power 7
Starting background process ARB0
Mon Apr 04 09:23:26 2016
ARB0 started with pid=38, OS id=53105
NOTE: assigning ARB0 to group 1/0x48695261 (DATA) with 7 parallel I/Os
cellip.ora not found.
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Mon Apr 04 09:23:37 2016
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0x48695261 (DATA)
Mon Apr 04 09:23:38 2016
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
NOTE: membership refresh pending for group 1/0x48695261 (DATA)
Mon Apr 04 09:23:44 2016
GMON querying group 1 at 14 for pid 18, osid 25195
SUCCESS: refreshed membership for 1/0x48695261 (DATA)
Mon Apr 04 09:23:47 2016
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Mon Apr 04 09:23:48 2016
SUCCESS: ALTER DISKGROUP DATA RESIZE DISK S2_DATA03_FG1 SIZE 511984M DISK S1_DATA01_FG1 SIZE 511984M DISK S2_DATA02_FG1 SIZE 511984M DISK S1_DATA02_FG1 SIZE 511984M DISK S2_DATA01_FG1 SIZE 511984M DISK S1_DATA03_FG1 SIZE 511984M REBALANCE WAIT/* ASRU */
Mon Apr 04 09:23:50 2016
SQL> /* ASRU */alter diskgroup DATA drop file '+DATA/tpfile'
SUCCESS: /* ASRU */alter diskgroup DATA drop file '+DATA/tpfile'



Once the ASRU utility has completed, the Storage Administrator should invoke the Space Compaction from the 3PAR console.

Patching Exadata Machine

################################################################
##    EXADATA MACHINE  INFRASTRUCTURE PATCHING of 1/8 RACK     ##
################################################################

This post describes, step by step, how to patch the infrastructure components of an Exadata machine.

———————————————————–
— Cell Storage Pre-requisites
———————————————————–

--Stop CRS using dcli
[root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root '/u01/app/12.1.0.2/grid/bin/crsctl stop crs'
 [root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root '/u01/app/12.1.0.2/grid/bin/crsctl stat res -t -init'
ch01db01: CRS-4639: Could not contact Oracle High Availability Services
ch01db01: CRS-4000: Command Status failed, or completed with errors.
ch01db02: CRS-4639: Could not contact Oracle High Availability Services
ch01db02: CRS-4000: Command Status failed, or completed with errors.
--Stop All Cell Storage Services
 [root@ch01db01 oracle]# dcli -g /home/oracle/cellhosts_ALL -l root "cellcli -e alter cell shutdown services all"
ch01celadm01:
ch01celadm01: Stopping the RS, CELLSRV, and MS services...
 ch01celadm01: The SHUTDOWN of services was successful.
 ch01celadm02:
 ch01celadm02: Stopping the RS, CELLSRV, and MS services...
 ch01celadm02: The SHUTDOWN of services was successful.
 ch01celadm03:
 ch01celadm03: Stopping the RS, CELLSRV, and MS services...
 ch01celadm03: The SHUTDOWN of services was successful.

[root@ch01db01 oracle]#
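Before starting the patch it is worth double checking that the services really are down on every cell; a possible check (assuming the cell attribute names of this cell software release) is:

[root@ch01db01 oracle]# dcli -g /home/oracle/cellhosts_ALL -l root "cellcli -e list cell attributes name,cellsrvStatus,msStatus,rsStatus"

All three status columns should report the services as stopped before proceeding.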

 

———————————————————–
–Cell Storage patching
———————————————————–

[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -reset_force
2016-02-05 11:17:07 +0100 :DONE: reset_force
[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -cleanup
2016-02-05 11:19:19 +0100        :Working: DO: Cleanup ...
2016-02-05 11:19:20 +0100        :SUCCESS: DONE: Cleanup
[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -patch_check_prereq
2016-02-05 11:20:56 +0100        :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell ...
 2016-02-05 11:20:57 +0100        :SUCCESS: DONE: Check cells have ssh equivalence for root user.
 2016-02-05 11:20:59 +0100        :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute ...
 2016-02-05 11:21:01 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:22:19 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:22:19 +0100        :Working: DO: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction. Up to 40 minutes ...
 2016-02-05 11:22:33 +0100 Wait correction of degraded md11 due to md partner size mismatch. Up to 30 minutes.
2016-02-05 11:22:34 +0100        :SUCCESS: DONE: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction.
 2016-02-05 11:22:34 +0100        :Working: DO: Check prerequisites on all cells. Up to 2 minutes ...
 2016-02-05 11:23:38 +0100        :SUCCESS: DONE: Check prerequisites on all cells.
 2016-02-05 11:23:38 +0100        :Working: DO: Execute plugin check for Patch Check Prereq ...
 2016-02-05 11:23:38 +0100        :INFO: Patchmgr plugin start: Prereq check for exposure to bug 17854520 v1.3. Details in logfile /u02/p17885582_121210_Linux-x86-64/patch_12.1.2.1.0.141206.1/patchmgr.stdout.
 2016-02-05 11:23:38 +0100        :SUCCESS: No exposure to bug 17854520 with non-rolling patching
 2016-02-05 11:23:39 +0100        :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
[root@ch01db01 patch_12.1.2.1.0.141206.1]#
 [root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -patch
********************************************************************************
 NOTE Cells will reboot during the patch or rollback process.
 NOTE For non-rolling patch or rollback, ensure all ASM instances using
 NOTE the cells are shut down for the duration of the patch or rollback.
 NOTE For rolling patch or rollback, ensure all ASM instances using
 NOTE the cells are up for the duration of the patch or rollback.
WARNING Do not start more than one instance of patchmgr.
 WARNING Do not interrupt the patchmgr session.
 WARNING Do not alter state of ASM instances during patch or rollback.
 WARNING Do not resize the screen. It may disturb the screen layout.
 WARNING Do not reboot cells or alter cell services during patch or rollback.
 WARNING Do not open log files in editor in write mode or try to alter them.
NOTE All time estimates are approximate.
 NOTE You may interrupt this patchmgr run in next 60 seconds with CONTROL-c.
********************************************************************************
2016-02-05 11:27:08 +0100        :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell ...
 2016-02-05 11:27:09 +0100        :SUCCESS: DONE: Check cells have ssh equivalence for root user.
 2016-02-05 11:27:12 +0100        :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute ...
 2016-02-05 11:27:32 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:27:32 +0100        :Working: DO: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction. Up to 40 minutes ...
 2016-02-05 11:27:45 +0100 Wait correction of degraded md11 due to md partner size mismatch. Up to 30 minutes.
2016-02-05 11:27:46 +0100        :SUCCESS: DONE: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction.
 2016-02-05 11:27:46 +0100        :Working: DO: Check prerequisites on all cells. Up to 2 minutes ...
 2016-02-05 11:28:50 +0100        :SUCCESS: DONE: Check prerequisites on all cells.
 2016-02-05 11:28:50 +0100        :Working: DO: Copy the patch to all cells. Up to 3 minutes ...
 2016-02-05 11:29:22 +0100        :SUCCESS: DONE: Copy the patch to all cells.
 2016-02-05 11:29:24 +0100        :Working: DO: Execute plugin check for Patch Check Prereq ...
 2016-02-05 11:29:24 +0100        :INFO: Patchmgr plugin start: Prereq check for exposure to bug 17854520 v1.3. Details in logfile /u02/p17885582_121210_Linux-x86-64/patch_12.1.2.1.0.141206.1/patchmgr.stdout.
 2016-02-05 11:29:24 +0100        :SUCCESS: No exposure to bug 17854520 with non-rolling patching
 2016-02-05 11:29:25 +0100        :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
 2016-02-05 11:29:25 +0100 1 of 5 :Working: DO: Initiate patch on cells. Cells will remain up. Up to 5 minutes ...
 2016-02-05 11:29:37 +0100 1 of 5 :SUCCESS: DONE: Initiate patch on cells.
 2016-02-05 11:29:37 +0100 2 of 5 :Working: DO: Waiting to finish pre-reboot patch actions. Cells will remain up. Up to 45 minutes ...
 2016-02-05 11:30:37 +0100 Wait for patch pre-reboot procedures
2016-02-05 11:44:56 +0100 2 of 5 :SUCCESS: DONE: Waiting to finish pre-reboot patch actions.
 2016-02-05 11:44:56 +0100        :Working: DO: Execute plugin check for Patching ...
 2016-02-05 11:44:56 +0100        :SUCCESS: DONE: Execute plugin check for Patching.
 2016-02-05 11:44:56 +0100 3 of 5 :Working: DO: Finalize patch on cells. Cells will reboot. Up to 5 minutes ...
 2016-02-05 11:45:17 +0100 3 of 5 :SUCCESS: DONE: Finalize patch on cells.
 2016-02-05 11:45:17 +0100 4 of 5 :Working: DO: Wait for cells to reboot and come online. Up to 120 minutes ...
 2016-02-05 11:46:17 +0100 Wait for patch finalization and reboot
2016-02-05 13:09:24 +0100 4 of 5 :SUCCESS: DONE: Wait for cells to reboot and come online.
 2016-02-05 13:09:24 +0100 5 of 5 :Working: DO: Check the state of patch on cells. Up to 5 minutes ...
 2016-02-05 13:10:09 +0100 5 of 5 :SUCCESS: DONE: Check the state of patch on cells.
 2016-02-05 13:10:09 +0100        :Working: DO: Execute plugin check for Post Patch ...
 2016-02-05 13:10:10 +0100        :SUCCESS: DONE: Execute plugin check for Post Patch.
[root@ch01db01 patch_12.1.2.1.0.141206.1]#
[root@ch01db01 patch_12.1.2.1.0.141206.1]# dcli -c ch01celadm01 -l root 'imageinfo'
 ch01celadm01:
 ch01celadm01: Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
 ch01celadm01: Cell version: OSS_12.1.2.1.0_LINUX.X64_141206.1
 ch01celadm01: Cell rpm version: cell-12.1.2.1.0_LINUX.X64_141206.1-1.x86_64
 ch01celadm01:
 ch01celadm01: Active image version: 12.1.2.1.0.141206.1
 ch01celadm01: Active image activated: 2016-02-05 20:14:52 +0100
 ch01celadm01: Active image status: success
 ch01celadm01: Active system partition on device: /dev/md5
 ch01celadm01: Active software partition on device: /dev/md7
 ch01celadm01:
 ch01celadm01: Cell boot usb partition: /dev/sdac1
 ch01celadm01: Cell boot usb version: 12.1.2.1.0.141206.1
 ch01celadm01:
 ch01celadm01: Inactive image version: 12.1.1.1.1.140712
 ch01celadm01: Inactive image activated: 2014-08-06 11:50:09 +0200
 ch01celadm01: Inactive image status: success
 ch01celadm01: Inactive system partition on device: /dev/md6
 ch01celadm01: Inactive software partition on device: /dev/md8
 ch01celadm01:
 ch01celadm01: Inactive marker for the rollback: /boot/I_am_hd_boot.inactive
 ch01celadm01: Inactive grub config for the rollback: /boot/grub/grub.conf.inactive
 ch01celadm01: Inactive kernel version for the rollback: 2.6.39-400.128.17.el5uek
 ch01celadm01: Rollback to the inactive partitions: Possible
 [root@ch01db01 patch_12.1.2.1.0.141206.1]#
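To verify the new image on all cells in one shot, imageinfo can also be run through dcli (a sketch; the -ver flag prints only the active image version):

[root@ch01db01 patch_12.1.2.1.0.141206.1]# dcli -g /home/oracle/cellhosts -l root 'imageinfo -ver'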

-----------------------------------------------------------
-- DB Server Patching
-----------------------------------------------------------

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -h

Usage: dbnodeupdate.sh [ -u | -r | -c ] -l <baseurl|zip file> [-p] <phase> [-n] [-s] [-q] [-v] [-t] [-a] <alert.sh> [-b] [-m] | [-V] | [-h]
-u                       Upgrade
 -r                       Rollback
 -c                       Complete post actions (verify image status, cleanup, apply fixes, relink all homes, enable GI to start/start all domU's)
 -l <baseurl|zip file>    Baseurl (http or zipped iso file for the repository)
 -s                       Shutdown stack (domU's for VM) before upgrading/rolling back
 -p                       Bootstrap phase (1 or 2) only to be used when instructed by dbnodeupdate.sh
 -q                       Quiet mode (no prompting) only be used in combination with -t
 -n                       No backup will be created (Option disabled for systems being updated from Oracle Linux 5 to Oracle Linux 6)
 -t                       'to release' - used when in quiet mode or used when updating to one-offs/releases via 'latest' channel (requires 11.2.3.2.1)
 -v                       Verify prereqs only. Only to be used with -u and -l option
 -b                       Perform backup only
 -a <alert.sh>            Full path to shell script used for alert trapping
 -m                       Install / update-to exadata-sun/hp-computenode-minimum only (11.2.3.3.0 and later)
 -i                       Ignore /etc/oratab - relinking will be disabled. Only possible in combination with -c.
 -V                       Print version
 -h                       Print usage
For upgrading from releases 11.2.2.4.2 and later:
 Example using iso  : ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip
 Example using http : ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/
 Example: ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.2.1/base/x86_64/
 Example: ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/
For upgrading from releases 11.2.2.4.2 and later in quiet mode:
 Example: ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -q -t 11.2.3.2.1.130302
For completion steps:
 Example: ./dbnodeupdate.sh -c
For rollback:
 Example: ./dbnodeupdate.sh -r
For pre-req checks only:
 Example using iso  : ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -v
 Example using http : ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/ -v
For backup only:
 Example: ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -b
See MOS 1553103.1 for more examples
[root@ch01db02 dbnodeupdate]#

———————————–
–DB Server patching Verification
———————————–

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -u -l /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip -v
##########################################################################################################################
 #                                                                                                                        #
 # Guidelines for using dbnodeupdate.sh (rel. 4.18):                                                                      #
 #                                                                                                                        #
 # - Prerequisites for usage:                                                                                             #
 #         1. Refer to dbnodeupdate.sh options. See MOS 1553103.1                                                         #
 #         2. Use the latest release of dbnodeupdate.sh. See patch 16486998                                               #
 #         3. Run the prereq check with the '-v' option.                                                                  #
 #                                                                                                                        #
 #   I.e.:  ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v                                                               #
 #          ./dbnodeupdate.sh -u -l http://my-yum-repo -v                                                                 #
 #                                                                                                                        #
 # - Prerequisite dependency check failures can happen due to customization:                                              #
 #     - The prereq check detects dependency issues that need to be addressed prior to running a successful update.       #
 #     - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
 #                                                                                                                        #
 #   When upgrading from releases later than 11.2.2.4.2 to releases before 11.2.3.3.0:                                    #
 #      - Conflicting packages should be removed before proceeding the update.                                            #
 #                                                                                                                        #
 #   When upgrading to releases 11.2.3.3.0 or later:                                                                      #
 #      - When the 'exact' package dependency check fails 'minimum' package dependency check will be tried.               #
 #      - When the 'minimum' package dependency check also fails,                                                         #
 #        the conflicting packages should be removed before proceeding.                                                   #
 #                                                                                                                        #
 # - As part of the prereq checks and as part of the update, a number of rpms will be removed.                            #
 #   This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages.        #
 #      - See /var/log/cellos/packages_to_be_removed.txt for details on what packages will be removed.                    #
 #                                                                                                                        #
 # - In case of any problem when filing an SR, upload the following:                                                      #
 #      - /var/log/cellos/dbnodeupdate.log                                                                                #
 #      - /var/log/cellos/dbnodeupdate.<runid>.diag                                                                       #
 #      - where <runid> is the unique number of the failing run.                                                          #
 #                                                                                                                        #
 ##########################################################################################################################
Continue ? [y/n]
 y
(*) 2016-02-05 17:06:43: Unzipping helpers (/u01/exapatch/dbnodeupdate/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
 (*) 2016-02-05 17:06:43: Initializing logfile /var/log/cellos/dbnodeupdate.log
 (*) 2016-02-05 17:06:44: Collecting system configuration settings. This may take a while...
 (*) 2016-02-05 17:07:10: Validating system settings for known issues and best practices. This may take a while...
 (*) 2016-02-05 17:07:10: Checking free space in /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641
 (*) 2016-02-05 17:07:10: Unzipping /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip to /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641, this may take a while
 (*) 2016-02-05 17:07:23: Original /etc/yum.conf moved to /etc/yum.conf.050215170641, generating new yum.conf
 (*) 2016-02-05 17:07:23: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo
 (*) 2016-02-05 17:07:56: Validating the specified source location.
 (*) 2016-02-05 17:07:57: Cleaning up the yum cache.

—————————————————————————————————————————–
Running in prereq check mode
—————————————————————————————————————————–

Active Image version   : 12.1.1.1.1.140712
 Active Kernel version  : 2.6.39-400.128.17.el5uek
 Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
 Inactive Image version : 12.1.1.1.0.131219
 Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
 Current user id        : root
 Action                 : upgrade
 Upgrading to           : 12.1.2.1.0.141206.1 - Oracle Linux 5->6 upgrade
 Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/050215170641/x86_64/ (iso)
 Iso file               : /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641/exadata_ol6_base_repo_141206.1.iso
 Create a backup        : Yes (backup at update mandatory when updating from OL5 to OL6)
 Shutdown stack         : No (Currently stack is down)
 RPM exclusion list     : Function not available for OL5->OL6 upgrades
 RPM obsolete list      : Function not available for OL5->OL6 upgrades
 Exact dependencies     : Function not available for OL5->OL6 upgrades
 Minimum dependencies   : Function not available for OL5->OL6 upgrades
 Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 050215170641)
 Diagfile               : /var/log/cellos/dbnodeupdate.050215170641.diag
 Server model           : SUN FIRE X4170 M3
 dbnodeupdate.sh rel.   : 4.18 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
 Note                   : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.
The following known issues will be checked for and automatically corrected by dbnodeupdate.sh:
 (*) - Issue - Fix for CVE-2014-9295 AND ELSA-2014-1974
The following known issues will be checked for but require manual follow-up:
 (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
 (*) - Issue - Exafusion silently enabled for database 12.1.0.2.0 with kernel 2.6.39-400.200 and later. See MOS 1947476.1 for more details.
---------------------------------------------------------------------------------------------------------------------
 NOTE:
 When upgrading to Oracle Linux 6 a backup is required for systems configured with logical volume manager (lvm).
 It appears no backup of the current image exist on the inactive lvm.
 This means a mandatory backup will be made using dbnodeupdate.sh before the actual update starts.
 ---------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------

-------------------------------------------
 Prereq check finished successfully, check the above report for next steps.
 -----------------------------------------------------------------------------------------------------------------------------
(*) 2016-02-05 17:08:01: Cleaning up iso and temp mount points
[root@ch01db02 dbnodeupdate]#

———————————–

–DB Server patching Execution

———————————–

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -u -l /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip
##########################################################################################################################
 #                                                                                                                        #
 # Guidelines for using dbnodeupdate.sh (rel. 4.18):                                                                      #
 #                                                                                                                        #
 # - Prerequisites for usage:                                                                                             #
 #         1. Refer to dbnodeupdate.sh options. See MOS 1553103.1                                                         #
 #         2. Use the latest release of dbnodeupdate.sh. See patch 16486998                                               #
 #         3. Run the prereq check with the '-v' option.                                                                  #
 #                                                                                                                        #
 #   I.e.:  ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v                                                               #
 #          ./dbnodeupdate.sh -u -l http://my-yum-repo -v                                                                 #
 #                                                                                                                        #
 # - Prerequisite dependency check failures can happen due to customization:                                              #
 #     - The prereq check detects dependency issues that need to be addressed prior to running a successful update.       #
 #     - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
 #                                                                                                                        #
 #   When upgrading from releases later than 11.2.2.4.2 to releases before 11.2.3.3.0:                                    #
 #      - Conflicting packages should be removed before proceeding the update.                                            #
 #                                                                                                                        #
 #   When upgrading to releases 11.2.3.3.0 or later:                                                                      #
 #      - When the 'exact' package dependency check fails 'minimum' package dependency check will be tried.               #
 #      - When the 'minimum' package dependency check also fails,                                                         #
 #        the conflicting packages should be removed before proceeding.                                                   #
 #                                                                                                                        #
 # - As part of the prereq checks and as part of the update, a number of rpms will be removed.                            #
 #   This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages.        #
 #      - See /var/log/cellos/packages_to_be_removed.txt for details on what packages will be removed.                    #
 #                                                                                                                        #
 # - In case of any problem when filing an SR, upload the following:                                                      #
 #      - /var/log/cellos/dbnodeupdate.log                                                                                #
 #      - /var/log/cellos/dbnodeupdate.<runid>.diag                                                                       #
 #      - where <runid> is the unique number of the failing run.                                                          #
 #                                                                                                                        #
 ##########################################################################################################################
Continue ? [y/n]
 y
(*) 2016-02-05 17:09:38: Unzipping helpers (/u01/exapatch/dbnodeupdate/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
 (*) 2016-02-05 17:09:38: Initializing logfile /var/log/cellos/dbnodeupdate.log
 (*) 2016-02-05 17:09:39: Collecting system configuration settings. This may take a while...
 (*) 2016-02-05 17:10:07: Validating system settings for known issues and best practices. This may take a while...
 (*) 2016-02-05 17:10:07: Checking free space in /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936
 (*) 2016-02-05 17:10:07: Unzipping /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip to /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936, this may take a while
 (*) 2016-02-05 17:10:19: Original /etc/yum.conf moved to /etc/yum.conf.050215170936, generating new yum.conf
 (*) 2016-02-05 17:10:19: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo
 (*) 2016-02-05 17:10:42: Validating the specified source location.
 (*) 2016-02-05 17:10:43: Cleaning up the yum cache.
Active Image version   : 12.1.1.1.1.140712
 Active Kernel version  : 2.6.39-400.128.17.el5uek
 Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
 Inactive Image version : 12.1.1.1.0.131219
 Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
 Current user id        : root
 Action                 : upgrade
 Upgrading to           : 12.1.2.1.0.141206.1 - Oracle Linux 5->6 upgrade
 Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/050215170936/x86_64/ (iso)
 Iso file               : /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936/exadata_ol6_base_repo_141206.1.iso
 Create a backup        : Yes (backup at update mandatory when updating from OL5 to OL6)
 Shutdown stack         : No (Currently stack is down)
 RPM exclusion list     : Function not available for OL5->OL6 upgrades
 RPM obsolete list      : Function not available for OL5->OL6 upgrades
 Exact dependencies     : Function not available for OL5->OL6 upgrades
 Minimum dependencies   : Function not available for OL5->OL6 upgrade
 Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 050215170936)
 Diagfile               : /var/log/cellos/dbnodeupdate.050215170936.diag
 Server model           : SUN FIRE X4170 M3
 dbnodeupdate.sh rel.   : 4.18 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
 Note                   : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.
The following known issues will be checked for and automatically corrected by dbnodeupdate.sh:
 (*) - Issue - Fix for CVE-2014-9295 AND ELSA-2014-1974
The following known issues will be checked for but require manual follow-up:
 (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
 (*) - Issue - Exafusion silently enabled for database 12.1.0.2.0 with kernel 2.6.39-400.200 and later. See MOS 1947476.1 for more details.
Continue ? [y/n]
 y
(*) 2016-02-05 17:11:59: Verifying GI and DB''s are shutdown
 (*) 2016-02-05 17:12:00: Collecting console history for diag purposes
 (*) 2016-02-05 17:12:32: Unmount of /boot successful
 (*) 2016-02-05 17:12:32: Check for /dev/sda1 successful
 (*) 2016-02-05 17:12:32: Mount of /boot successful
 (*) 2016-02-05 17:12:32: Disabling stack from starting
 (*) 2016-02-05 17:12:33: Performing filesystem backup to /dev/mapper/VGExaDb-LVDbSys2. Avg. 30 minutes (maximum 120) depends per environment.......
 (*) 2016-02-05 17:18:44: Backup successful
 (*) 2016-02-05 17:18:47: ExaWatcher stopped successful
 (*) 2016-02-05 17:19:07: EM Agent (in /u01/app/oracle/product/agent12c/core/12.1.0.4.0) stopped successfully
 (*) 2016-02-05 17:19:07: Capturing service status and file attributes. This may take a while...
 (*) 2016-02-05 17:19:12: Service status and file attribute report in: /etc/exadata/reports
 (*) 2016-02-05 17:19:12: Validating the specified source location.
 (*) 2016-02-05 17:19:13: Cleaning up the yum cache.
 (*) 2016-02-05 17:19:14: Executing OL5->OL6 upgrade steps, system is expected to reboot multiple times.
 (*) 2016-02-05 17:21:37: Initialize of Oracle Linux 6 Upgrade successful. Rebooting now...
Broadcast message from root (pts/0) (Thu Feb  5 17:21:37 2015):
The system is going down for reboot NOW!
[root@ch01db02 dbnodeupdate]#
[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -c

———————————–
–Output new Image Version
———————————–

[root@ch01db01 ibdiagtools]# imageinfo
Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
 Image version: 12.1.2.1.0.141206.1
 Image activated: 2016-02-05 18:24:46 +0100
 Image status: success
 System partition on device: /dev/mapper/VGExaDb-LVDbSys1

Installation of Oracle Grid Infrastructure 12c

####################################
##  Linux Setup
####################################

Setup OS

– Disable the Firewall

[root@oel6srv01 ~]# service iptables save
[root@oel6srv01 ~]# service iptables stop
[root@oel6srv01 ~]# chkconfig iptables off
[root@oel6srv01 ~]# service iptables status
-- If you are using IPv6 firewall, enter:
 [root@oel6srv01 ~]# service ip6tables save
 [root@oel6srv01 ~]# service ip6tables stop
 [root@oel6srv01 ~]# chkconfig ip6tables off
 [root@oel6srv01 ~]# service ip6tables status


– Disable SELinux

[root@oel6srv01 ~]# vi /etc/sysconfig/selinux


– Disable kdump

[root@oel6srv01 ~]# chkconfig kdump off
[root@oel6srv01 ~]# chkconfig --list |grep kdump
kdump           0:off   1:off   2:off   3:off   4:off   5:off   6:off


– Network Setup

Public Cluster interfaces, VIPs and SCAN

Subnet 10.0.0.x
Netmask 255.255.255.0

Private Cluster interfaces

Subnet  192.168.0.x
Netmask 255.255.255.0
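For reference, a minimal /etc/hosts layout matching these subnets could look like the sketch below; the host names follow the nodes used later in this post, but every address is illustrative, and in production the SCAN should be resolved by DNS to three addresses rather than by /etc/hosts.

# Public network
10.0.0.11     oel6srv01.localdomain      oel6srv01
10.0.0.12     oel6srv02.localdomain      oel6srv02
# Virtual IPs (VIPs)
10.0.0.21     oel6srv01-vip.localdomain  oel6srv01-vip
10.0.0.22     oel6srv02-vip.localdomain  oel6srv02-vip
# Private interconnect
192.168.0.11  oel6srv01-priv.localdomain oel6srv01-priv
192.168.0.12  oel6srv02-priv.localdomain oel6srv02-priv
# ...and so on for oel6srv03 and oel6srv04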

 


– Kernel: add or amend the following lines in the “/etc/sysctl.conf” file.

# vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1062637568
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

--Activate the current Kernel parameters:
/sbin/sysctl -p

– Add the following lines to /etc/security/limits.conf

# vi  /etc/security/limits.conf
## Go to the end
 grid     soft     nproc      2047
 grid     hard     nproc     16384
 grid     soft     nofile     1024
 grid     hard     nofile    65536
 oracle   soft     nproc      2047
 oracle   hard     nproc     16384
 oracle   soft     nofile     1024
 oracle   hard     nofile    65536

– Add the following line to /etc/pam.d/login

# vi /etc/pam.d/login
session     required    pam_limits.so


– Disable Secure Linux (SELinux)

–Make sure the SELINUX flag is set as follows.

# vi /etc/selinux/config
SELINUX=disabled


– NTP Setup

–If you are using NTP, you must add the “-x” option to the following line in the “/etc/sysconfig/ntpd” file.

# vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# service ntpd restart
 -- OR stop the NTP server; Grid Infrastructure will then run CTSS in Active mode instead of Observer mode:
 # service ntpd stop
 # chkconfig ntpd off
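Once the Grid Infrastructure is installed, the resulting time synchronization mode can be verified with crsctl, which reports whether CTSS runs in Active or Observer mode:

[grid@oel6srv01 ~]$ crsctl check ctss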
---------------------------------------------
 - Mandatory Packages for Oracle Linux 6  and Red Hat Enterprise Linux 6
 ---------------------------------------------
 binutils-2.20.51.0.2-5.11.el6 (x86_64)
 compat-libcap1-1.10-1 (x86_64)
 compat-libstdc++-33-3.2.3-69.el6 (x86_64)
 compat-libstdc++-33-3.2.3-69.el6.i686
 gcc-4.4.4-13.el6 (x86_64)
 gcc-c++-4.4.4-13.el6 (x86_64)
 glibc-2.12-1.7.el6 (i686)
 glibc-2.12-1.7.el6 (x86_64)
 glibc-devel-2.12-1.7.el6 (x86_64)
 glibc-devel-2.12-1.7.el6.i686
 ksh
 libgcc-4.4.4-13.el6 (i686)
 libgcc-4.4.4-13.el6 (x86_64)
 libstdc++-4.4.4-13.el6 (x86_64)
 libstdc++-4.4.4-13.el6.i686
 libstdc++-devel-4.4.4-13.el6 (x86_64)
 libstdc++-devel-4.4.4-13.el6.i686
 libaio-0.3.107-10.el6 (x86_64)
 libaio-0.3.107-10.el6.i686
 libaio-devel-0.3.107-10.el6 (x86_64)
 libaio-devel-0.3.107-10.el6.i686
 make-3.81-19.el6
 sysstat-9.0.4-11.el6 (x86_64)
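A quick loop to spot missing packages before launching the installer (a sketch: it mirrors the list above by package name only, without checking versions or architectures):

for p in binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc \
         glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel \
         make sysstat; do
    rpm -q $p > /dev/null 2>&1 || echo "missing: $p"
done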

–Install the cvuqdisk RPM. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks,
–and it raises the error message “Package cvuqdisk not installed” when it is executed.

–A copy of the cvuqdisk package is included in the first Grid Infrastructure ZIP file.

— Log in as root.

  1. Use the following command to find out whether an existing version of the cvuqdisk package is installed:
# rpm -qi cvuqdisk

  2. If you have an existing version, deinstall it:
# rpm -e cvuqdisk

  3. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP

  4. Install the cvuqdisk package:
# rpm -iv cvuqdisk-1.0.9-1.rpm
--OR install the oracle-rdbms-server-12cR1-preinstall package.


– OPTIONAL RPMs

--Minimum ODBC Drivers for Oracle Linux 6 and Red Hat Enterprise Linux 6 on x86-64
 unixODBC-2.2.14-11.el6 (64-bit) or later
 unixODBC-devel-2.2.14-11.el6 (64-bit) or later

– UNIX Groups

/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1001 asmadmin
/usr/sbin/groupadd -g 1002 dba
/usr/sbin/groupadd -g 1003 asmdba
/usr/sbin/groupadd -g 1004 asmoper

–New optional roles which grant access to specific features like Data Guard, RMAN and Security were added in 12c, but they are not implemented in this example.


– UNIX Users

useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1101 -g oinstall -G asmdba,dba oracle


– Set SSH timeout wait to unlimited

# vi /etc/ssh/sshd_config
LoginGraceTime 0
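Restart sshd to make the change effective:

# service sshd restart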

 


– Create the u01 file system

[root@oel6srv01 ~]# fdisk /dev/sdb
The number of cylinders for this disk is set to 2871.
 There is nothing wrong with that, but this is larger than 1024,
 and could in certain setups cause problems with:
 1) software that runs at boot time (e.g., old versions of LILO)
 2) booting and partitioning software from other OSs
 (e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
 Command action
 e   extended
 p   primary partition (1-4)
 p
Partition number (1-4): 1
 First cylinder (1-2871, default 1):
 Using default value 1
 Last cylinder or +size or +sizeM or +sizeK (1-2871, default 2871):
 Using default value 2871
Command (m for help): w
 The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
 The kernel still uses the old table.
 The new table will be used at the next reboot.
 Syncing disks.
 [root@oel6srv01 ~]#
 [root@oel6srv01 ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
 255 heads, 63 sectors/track, 2610 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
 /dev/sda1   *           1          13      104391   83  Linux
 /dev/sda2              14        2610    20860402+  8e  Linux LVM
Disk /dev/sdb: 23.6 GB, 23622320128 bytes
 255 heads, 63 sectors/track, 2871 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
 /dev/sdb1               1        2871    23061276   83  Linux
[root@oel6srv01 ~]# mkfs.ext4 /dev/sdb1
 mke4fs 1.41.12 (17-May-2010)
 Filesystem label=
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 Stride=0 blocks, Stripe width=0 blocks
 1441792 inodes, 5765319 blocks
 288265 blocks (5.00%) reserved for the super user
 First data block=0
 Maximum filesystem blocks=4294967296
 176 block groups
 32768 blocks per group, 32768 fragments per group
 8192 inodes per group
 Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000
Writing inode tables: done
 Creating journal (32768 blocks): done
 Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 37 mounts or
 180 days, whichever comes first.  Use tune4fs -c or -i to override.
 [root@oel6srv01 ~]#
[root@oel6srv01 /]# mkdir u01
 [root@oel6srv01 /]# cat /etc/fstab
..
..
/dev/sdb1               /u01                    ext4    defaults        0 0
[root@oel6srv01 /]# mount /u01
 [root@oel6srv01 /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
 15G  3.2G   12G  22% /
 /dev/sda1              99M   24M   71M  25% /boot
 tmpfs                 1.3G     0  1.3G   0% /dev/shm
 /dev/sdb1              22G  172M   21G   1% /u01

– Creation of GI and RDBMS directories

mkdir -p /u01/GRID/12.1.0.1
 mkdir -p /u01/app/product/12.1.0.1
#Oracle Base
 chown -R oracle:oinstall /u01/app
 chmod -R 775 /u01/app
#Oracle RDBMS Home
 chown -R oracle:oinstall /u01/app/product/12.1.0.1
 chmod -R 775 /u01/app/product/12.1.0.1
#Grid Home
 chown -R grid:oinstall /u01/GRID
 chmod -R 775 /u01/GRID/12.1.0.1

– Add these entries to the generic user profile

# vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
if [ $USER = "root" ]; then
    umask 022
fi

— Configure the shared storage for ASM
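This post does not detail the storage step; one common approach on OL6 is a udev rule that gives the shared disks stable ownership and permissions for ASM. The sketch below is illustrative only: the WWID placeholder must be replaced with the value scsi_id returns for each of your own disks.

# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="<disk_wwid>", OWNER="grid", GROUP="asmadmin", MODE="0660"
# /sbin/start_udev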

##########################################################
##  Installation Oracle Grid Infrastructure
##########################################################

--Run 12c Cluvfy with the following options to verify that all prerequisites are met:
 ./runcluvfy.sh stage -post hwos -n oel6srv01,oel6srv02,oel6srv03,oel6srv04 -verbose
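In addition to the post-hwos stage, the pre-crsinst stage is normally run right before the installation itself, with the same node list:

 ./runcluvfy.sh stage -pre crsinst -n oel6srv01,oel6srv02,oel6srv03,oel6srv04 -verbose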

# Start the Grid Installation...

[grid@oel6srv01 grid]$ ./runInstaller
 Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB.   Actual 39776 MB    Passed
 Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
 Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
 Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-06-25_08-23-26PM. Please wait ..

 

##############################################################
##  Grid Infrastructure crsctl output
##############################################################

[grid@oel6srv01 ~]$ crsctl stat res -t
 --------------------------------------------------------------------------------
 Name           Target  State        Server                   State details
 --------------------------------------------------------------------------------
 Local Resources
 --------------------------------------------------------------------------------
 ora.ASMNET2LSNR_ASM.lsnr
 ONLINE  ONLINE       oel6srv01                STABLE
 ONLINE  ONLINE       oel6srv02                STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.DATA1.VOL_CLOUD01.advm
 ONLINE  ONLINE       oel6srv01                Volume device /dev/a
 sm/vol_cloud01-178 i
 s online,STABLE
 ONLINE  ONLINE       oel6srv02                Volume device /dev/a
 sm/vol_cloud01-178 i
 s online,STABLE
 ONLINE  ONLINE       oel6srv03                Volume device /dev/a
 sm/vol_cloud01-178 i
 s online,STABLE
 ONLINE  ONLINE       oel6srv04                Volume device /dev/a
 sm/vol_cloud01-178 i
 s online,STABLE
 ora.DATA1.dg
 OFFLINE OFFLINE      oel6srv01               STABLE
 OFFLINE OFFLINE      oel6srv02               STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.FRA1.dg
 OFFLINE OFFLINE      oel6srv01               STABLE
 OFFLINE OFFLINE      oel6srv02               STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.LISTENER.lsnr
 ONLINE  ONLINE       oel6srv01                STABLE
 ONLINE  ONLINE       oel6srv02                STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.OCRVOTING.dg
 OFFLINE OFFLINE      oel6srv01               STABLE
 OFFLINE OFFLINE      oel6srv02               STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.data1.vol_cloud01.acfs
 ONLINE  ONLINE       oel6srv01                mounted on /cloudfs,
 STABLE
 ONLINE  ONLINE       oel6srv02                mounted on /cloudfs,
 STABLE
 ONLINE  ONLINE       oel6srv03                mounted on /cloudfs,
 STABLE
 ONLINE  ONLINE       oel6srv04                mounted on /cloudfs,
 STABLE
 ora.net1.network
 ONLINE  ONLINE       oel6srv01                STABLE
 ONLINE  ONLINE       oel6srv02                STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.ons
 ONLINE  ONLINE       oel6srv01                STABLE
 ONLINE  ONLINE       oel6srv02                STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.proxy_advm
 ONLINE  ONLINE       oel6srv01                STABLE
 ONLINE  ONLINE       oel6srv02                STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 --------------------------------------------------------------------------------
 Cluster Resources
 --------------------------------------------------------------------------------
 ora.LISTENER_SCAN1.lsnr
 1        ONLINE  ONLINE       oel6srv04                STABLE
 ora.MGMTLSNR
 1        ONLINE  ONLINE       oel6srv04                169.254.25.188 192.1
 68.0.114 192.168.0.1
 24,STABLE
 ora.asm
 1        ONLINE  ONLINE       oel6srv04                STABLE
 3        ONLINE  ONLINE       oel6srv03                STABLE
 ora.cvu
 1        ONLINE  ONLINE       oel6srv04                STABLE
 ora.mgmtdb
 1        ONLINE  ONLINE       oel6srv04                Open,STABLE
 ora.oc4j
 1        ONLINE  ONLINE       oel6srv01                STABLE
 ora.oel6srv01.vip
 1        ONLINE  ONLINE       oel6srv01                STABLE
 ora.oel6srv02.vip
 1        ONLINE  ONLINE       oel6srv02                STABLE
 ora.oel6srv03.vip
 1        ONLINE  ONLINE       oel6srv03                STABLE
 ora.oel6srv04.vip
 1        ONLINE  ONLINE       oel6srv04                STABLE
 ora.scan1.vip
 1        ONLINE  ONLINE       oel6srv04                STABLE
 --------------------------------------------------------------------------------

Oracle Cluster Name

#################################################
##    How to display Oracle Cluster name       ##
#################################################

Oracle Clusterware includes the cemutlo utility, which provides the cluster name and version.

[grid@lnxcld01 ~]$ cemutlo -h
Usage: /GRID_INFRA/product/11.2.0.3/bin/cemutlo.bin [-n] [-w]
        where:
        -n prints the cluster name
        -w prints the clusterware version in the following format:
                 <major_version>:<minor_version>:<vendor_info>


--Cluster Name
[grid@lnxcld01 ~]$ cemutlo -n
cloud01

--Cluster Version
[grid@lnxcld01 ~]$ cemutlo -w
2:1:

Following the format above, "2:1:" means clusterware major version 2, minor version 1, and no vendor information.