Thursday, November 04, 2010

How to create Datafiles in a Data Guard (10g) environment

I recently came across a Data Guard situation where a datafile was added to the primary database but failed to be created on the standby. The database version was 10.2.0.3 and STANDBY_FILE_MANAGEMENT was set to AUTO.

STANDBY_FILE_MANAGEMENT is an initialization parameter; when it is set to AUTO on the standby, a corresponding datafile is created automatically on the standby for every datafile created on the primary.
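
For reference, a quick way to confirm the current setting on the standby (a minimal check; output formatting abbreviated):

SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
standby_file_management              string      AUTO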

Within a few minutes I started getting alerts on the standby for the following error.

WARNING: File being created with same name as in Primary
Existing file may be overwritten
File #309 added to control file as 'UNNAMED00309'.
Originally created as: '/disk1/oradata/primarydb01/sys06.dbf'
Recovery was unable to create the file as: '/disk1/oradata/primary/sys06.dbf'
Errors with log /disk1/archive/stbydb/sf2_1_215423_528599320.arc
MRP0: Background Media Recovery terminated with error 1119
Tue Feb 5 07:42:09 2008
Errors in file /disk1/oracle/admin/stbydb2/bdump/stbydb2_mrp0_12552.trc:
ORA-01119: error in creating database file '/disk1/oradata/sf12/sys06.dbf'
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
SVR4 Error: 13: Permission denied
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
Tue Feb 5 07:42:13 2008
Errors in file /disk1/oracle/admin/stbydb2/bdump/stbydb2_mrp0_12552.trc:
ORA-01119: error in creating database file '/ora-disk/oradata/sf12/sys06.dbf'
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
SVR4 Error: 13: Permission denied
Tue Feb 5 07:42:13 2008

From the message it was clear that the datafile creation had failed on the standby, even though STANDBY_FILE_MANAGEMENT was set to AUTO. It failed because the directory the datafile was created in on the primary, "/disk1/oradata/primary/", does not exist on the physical standby. We want "/disk1/oradata/primary/" to map to "/disk1/oradata/standby/" on the standby, which is handled by the DB_FILE_NAME_CONVERT parameter set on the standby. However, if this parameter holds several conversion pairs,

SQL> show parameter db_file_name_convert

may display the value as empty. At that point, the easiest way to check the value is to view the parameter file.
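
As an aside, depending on the version you may also be able to see each conversion pair as its own row by querying V$SPPARAMETER (a quick sketch; output shape may vary):

SQL> select name, ordinal, value from v$spparameter where name = 'db_file_name_convert';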

So log in to the standby database and issue

Step 1: SQL> create pfile from spfile;

Now go to the $ORACLE_HOME/dbs directory and open that initialization parameter file.

In our case, the conversion entry for the directory in which the new datafile was created was clearly missing on the standby.

Edit the pfile to add that conversion:

db_file_name_convert='/disk1/oradata/primary','/disk1/oradata/standby'
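
If the standby needs to remap several primary directories, all of the pairs go into this single parameter. A sketch of what that could look like (the /disk2 paths below are made-up examples, not from this incident):

# pairs are listed as primary_path, standby_path, primary_path, standby_path, ...
db_file_name_convert='/disk1/oradata/primary','/disk1/oradata/standby','/disk2/oradata/primary','/disk2/oradata/standby'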

Step 2: Now log in to the standby and cancel the managed recovery

SQL> alter database recover managed standby database cancel;

Database altered.

Step 3: Shutdown and startup the standby database using new parameter file.



SQL> shutdown immediate;
ORA-01507: database not mounted



ORACLE instance shut down.

SQL> startup mount pfile=$ORACLE_HOME/dbs/initstandby.ora
ORACLE instance started.

SQL> create spfile from pfile;

Step 4: Start the managed recovery on the standby.

SQL> alter database recover managed standby database disconnect from session;

Database altered.

The above command tells the standby to resume applying archive logs from the point where it stopped. This time I got the following errors in the alert log:

MRP0: Background Managed Standby Recovery process started (stbydb2)
Managed Standby Recovery not using Real Time Apply
MRP0: Background Media Recovery terminated with error 1111
Wed Feb 6 13:50:05 2008
Errors in file /disk1/oracle/admin/stbydb2/bdump/stbydb2_mrp0_22855.trc:
ORA-01111: name for data file 309 is unknown - rename to correct file
ORA-01110: data file 309: '/disk1/oracle/product/10.2.0/dbs/UNNAMED00309'
ORA-01157: cannot identify/lock data file 309 - see DBWR trace file
ORA-01111: name for data file 309 is unknown - rename to correct file
ORA-01110: data file 309: '/disk1/oracle/product/10.2.0/dbs/UNNAMED00309'
Wed Feb 6 13:50:05 2008
Errors in file /disk1/oracle/admin/stbydb2/bdump/stbydb2_mrp0_22855.trc:
ORA-01111: name for data file 309 is unknown - rename to correct file
ORA-01110: data file 309: '/disk1/oracle/product/10.2.0/dbs/UNNAMED00309'
ORA-01157: cannot identify/lock data file 309 - see DBWR trace file
ORA-01111: name for data file 309 is unknown - rename to correct file
ORA-01110: data file 309: '/disk1/oracle/product/10.2.0/dbs/UNNAMED00309'
Wed Feb 6 13:50:05 2008
MRP0: Background Media Recovery process shutdown (stbydb2)
Wed Feb 6 13:50:05 2008
Completed: alter database recover managed standby database disconnect from session
Wed Feb 6 13:52:09 2008
Using STANDBY_ARCHIVE_DEST parameter default value as /disk2/archive/stbydb2/
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1]: Assigned to RFS process 23024

This is because whenever datafile creation fails on the standby, Oracle adds a placeholder datafile entry named UNNAMEDnnnnn to the standby control file, with a path under $ORACLE_HOME/dbs even though no such file exists on disk. When managed recovery resumes, it trips over that UNNAMED entry and the standby fails again with the message:

ORA-01111: name for data file 309 is unknown - rename to correct file

You now need to rename that placeholder to the file you actually want to create, so the physical standby can continue applying the remaining logs.
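
If you are not sure which file number is affected, a quick way to list the placeholder entries on the standby (a sketch; the file number and path shown are the ones from this incident):

SQL> select file#, name from v$datafile where name like '%UNNAMED%';

     FILE# NAME
---------- ---------------------------------------------
       309 /disk1/oracle/product/10.2.0/dbs/UNNAMED00309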

Step 5: However, when you try to issue the RENAME command, you get this error:

SQL> alter database rename file '/disk1/oracle/product/10.2.0/dbs/UNNAMED00309' to
'/disk1/oradata/stbydb2/sys06.dbf'
*
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01275: Operation RENAME is not allowed if standby file management is automatic.

This is because RENAME (as well as ADD/DROP and CREATE) operations are not allowed on the standby while STANDBY_FILE_MANAGEMENT is set to AUTO. You need to change the parameter to MANUAL for the above command to work.

Step 6: SQL> alter system set STANDBY_FILE_MANAGEMENT=MANUAL scope=memory;

System altered.

Step 7: SQL> alter database rename file
'/disk1/oracle/product/10.2.0/dbs/UNNAMED00309' to '/disk1/oradata/stbydb2/sys06.dbf';
alter database rename file '/disk1/oracle/product/10.2.0/dbs/UNNAMED00309' to '/disk1/oradata/stbydb2/sys06.dbf'
*
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01141: error renaming data file 309 - new file
'/disk1/oradata/stbydb2/sys06.dbf' not found
ORA-01111: name for data file 309 is unknown - rename to correct file
ORA-01110: data file 309:
'/disk1/oracle/product/10.2.0/dbs/UNNAMED00309'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3



Oh... what now? Well, we cannot rename to a file that does not already exist, so we actually need to create the datafile.

Step 8: SQL> alter database create datafile
'/disk1/oracle/product/10.2.0/dbs/UNNAMED00309' as '/disk1/oradata/stbydb2/sys06.dbf';

Database altered.

Yes, finally! Ok, datafile created. Let's put the automatic recovery back on the standby.

Step 9: SQL> alter database recover standby database;
alter database recover standby database
*
ERROR at line 1:
ORA-00279: change 9999250009 generated at 02/05/2008 07:28:43 needed for thread 1
ORA-00289: suggestion : /disk2/archive/stbydb2/sf2_1_215423_528599320.arc
ORA-00280: change 9999250009 for thread 1 is in sequence #215423



Now Oracle wants me to manually specify the archive log files to be applied, since this form of the RECOVER command does not pick up the logs automatically. I also need to put STANDBY_FILE_MANAGEMENT back to AUTO.

Step 10: SQL> alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=memory;

System altered.

Step 11: Now issue the command
SQL> recover automatic standby database;

Media recovery complete.

Check the alert log:

tail -f alert*

RFS[1]: Assigned to RFS process 27167
RFS[1]: Identified database type as 'physical standby'
Wed Feb 6 14:48:32 2008
RFS LogMiner: Client disabled from further notification
Wed Feb 6 14:48:45 2008
Media Recovery Log /disk2/archive/stbydb2/sf2_1_215425_528599320.arc
Wed Feb 6 14:50:04 2008
RFS[1]: No standby redo logfiles created
Wed Feb 6 14:51:32 2008
RFS[1]: Archived Log: '/disk2/archive/stbydb2/sf2_1_215542_528599320.arc'
Wed Feb 6 14:56:42 2008
Recovery created file /disk1/oradata/stbydb2/users121.dbf
Successfully added datafile 310 to media recovery
Datafile #310: '/disk1/oradata/stbydb2/users121.dbf'



Yes, it successfully created the datafile and recovery resumed. All 105 missing archive log files were applied against the 2.5TB OLTP database.
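
To confirm the standby has caught up, a couple of quick checks help (a sketch using the standard 10g views, run on the standby):

SQL> select process, status, thread#, sequence# from v$managed_standby;

SQL> select max(sequence#) from v$archived_log where applied = 'YES';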

Best practices and lessons learned: Whenever you add a datafile to the primary, log in to the standby and make sure it was created successfully. The best way to do that is to issue "tail -f alert*" in the background_dump_dest directory. If for some reason the datafile creation has failed, cancel the recovery on the standby and perform only steps 6, 8, 10 and 11 in this document.
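
For quick reference, here are those steps condensed into one place (the UNNAMED and datafile names are the ones from this incident; substitute your own):

SQL> alter system set STANDBY_FILE_MANAGEMENT=MANUAL scope=memory;
SQL> alter database create datafile
'/disk1/oracle/product/10.2.0/dbs/UNNAMED00309' as '/disk1/oradata/stbydb2/sys06.dbf';
SQL> alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=memory;
SQL> recover automatic standby database;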

Tuesday, November 02, 2010

Oracle 10g Compression vs. 11g's Advanced Compression

This document is meant to show the difference between the 10gR2 Table Compression feature and 11g's Advanced Compression. Oracle provided a table-level compression feature in 10gR2. While it delivered some storage reduction, 10g's table compression only compressed data during bulk-load operations; new and updated data were not compressed. With 11g's Advanced Compression, new and updated data are also compressed, achieving the highest level of storage reduction while also improving performance, since compressed blocks mean more data is moved per I/O.

[Please understand that these test cases are only meant as quick examples to analyze and understand the benefits of Advanced Compression in 11g. Depending on the amount of duplicate data in your environment, your results may vary.]


Here is the example.

The following test case was executed on a 10.2.0.1 database server.

A table called TEST was created without the COMPRESS option.
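
The exact DDL for TEST is not shown here; something along these lines would build a similar test table (this is an assumption on my part: the OBJECT_NAME column updated later suggests it was based on DBA_OBJECTS; rerunning the self-insert a few times grows the segment, and row counts will differ per database):

SQL> create table TEST as select * from dba_objects;
SQL> insert into TEST select * from TEST;
SQL> commit;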

SQL> select table_name,compression from dba_tables where table_name = 'TEST';

TABLE_NAME                     COMPRESS
------------------------------ --------
TEST                           DISABLED


SQL> select bytes from dba_segments where segment_name = 'TEST';

BYTES
----------
92274688


The size of the table was around 92MB.

Now create another table called TEST_COMPRESSED with COMPRESS option.

SQL> create table TEST_COMPRESSED COMPRESS as select * from test;

Table created.

SQL> select table_name, compression from dba_tables where table_name like 'TEST%';

TABLE_NAME                     COMPRESS
------------------------------ --------
TEST_COMPRESSED                ENABLED
TEST                           DISABLED


Now let’s check the size of the COMPRESSED table.


SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
30408704

Check out the size of the COMPRESSED table: it is only about 30MB, roughly a third of the original size (around a 67% reduction). So far so good.

Now let’s do a plain insert into the COMPRESSED table.

SQL> insert into TEST_COMPRESSED select * from TEST;

805040 rows created.

SQL> commit;

Commit complete.

Let’s check the size of the COMPRESSED table.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED'

2 /

BYTES
----------
117440512


Wow! From 30MB to 117MB? So a plain INSERT statement does not compress the data in 10g.

(You will see this is not the case with 11g)

Now let's do the same insert as a bulk (direct-path) load.

SQL> insert /*+ APPEND */ into TEST_COMPRESSED select * from TEST;

805040 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';


BYTES
----------
142606336

OK, now the size of the COMPRESSED table has grown from 117MB to 142MB. For the same number of rows, the table grew by only 25MB this time. So a bulk load does compress the data.
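
As an aside, one simple way to confirm the APPEND hint really took the direct path (a quick, informal check): a direct-path insert leaves the table unreadable by the same transaction until you commit, so querying it before the commit raises an error such as:

SQL> insert /*+ APPEND */ into TEST_COMPRESSED select * from TEST;
SQL> select count(*) from TEST_COMPRESSED;
select count(*) from TEST_COMPRESSED
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel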

Let’s check other DML statements such as DELETE and UPDATE against the COMPRESSED table.


SQL> delete from test_compressed where rownum < 100000;

99999 rows deleted.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
142606336

No change in total size of the table. DELETE has no impact as expected.


Let’s check UPDATE.

SQL> update test_compressed set object_name = 'XXXXXXXXXXXXXXXXXXXXXXXXXX' where
rownum < 100000;

99999 rows updated.

SQL> commit;


SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
150994944

The table size increased by 8MB. No compression for the UPDATE statement either.


All this clearly shows that 10g's table compression works well for the initial bulk loads; subsequent UPDATEs, DELETEs and conventional INSERTs, however, do not result in compressed blocks.

Now, let's see the 11g test results.

The following SQL statements were executed against an 11.2.0.1 database.


A TEST table of about 100MB was created, as before.



SQL> select bytes from dba_segments where segment_name = 'TEST';

BYTES
----------
100663296

So a 100MB table was created.

Let's create a table with the COMPRESS FOR ALL OPERATIONS option. This is only available in 11g.


SQL> create table test_compressed compress for all operations as select * from
test;

Table created.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
31457280


Check out the size of the compressed table vs. the uncompressed table: the compressed copy occupies only about 30% of the space (31MB vs. 100MB). So far, not a big difference compared to 10g.
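
In 11g you can also confirm from the data dictionary how each table is set to compress; DBA_TABLES gained a COMPRESS_FOR column (the exact value displayed, e.g. OLTP or FOR ALL OPERATIONS, varies by release):

SQL> select table_name, compression, compress_for from dba_tables where table_name like 'TEST%';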




Let’s check other DML statements.

Let's do a plain insert into the compressed table.

SQL> insert into TEST_COMPRESSED select * from test;

789757 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
75497472



11g's Advanced Compression compressed the 100MB of newly inserted data down to roughly 44MB (the segment grew from 31MB to 75MB), WITHOUT a bulk-load insert.

Now let’s do the BULK LOAD onto 11g’s COMPRESSED table.


SQL> insert /*+ APPEND */ into test_compressed select * from TEST;

789757 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904

It has roughly the same impact as the plain insert: the data is compressed either way.

What about deletes and updates?


SQL> delete from test_compressed where rownum < 100000;

99999 rows deleted.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904


No change for deletes. This is expected: with Advanced Compression, a block is compressed when newly added rows fill it up to the PCTFREE threshold, so deletes alone do not change the segment size.


SQL> update test_compressed set object_name = 'XXXXXXXXXXXXXXXXXXXXXXXXXX' where
  2  rownum < 100000;

99999 rows updated.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904


There is no change in this case either, as the existing blocks were able to accommodate the updates. The same update, however, grew the table by 8MB in 10g.


Summary:

The above tests show that 11g's Advanced Compression compresses table data for all INSERTs and UPDATEs, resulting in a significant storage reduction, whereas 10g's compression only compresses data during bulk-load operations; data added or updated afterwards is not compressed.
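
If you want existing tables to pick up Advanced Compression after an upgrade, one approach is an ALTER TABLE ... MOVE (a sketch only: it rebuilds the segment, requires the Advanced Compression option, and leaves indexes unusable, so rebuild them afterwards and test first; the index name below is a made-up example):

SQL> alter table TEST_COMPRESSED move compress for all operations;
SQL> alter index TEST_COMPRESSED_IDX rebuild;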