Oracle Tips and Tricks — David Fitzjarrell

May 8, 2011

"Sherman, set the WAYBAC machine for …"

Filed under: flashback — dfitzjarrell @ 22:22

Archiving older data is a complex task; local, national and sometimes international regulations dictate when, how and for how long the archived data must remain available. Add to that the seemingly insurmountable task of storing all of this data electronically and what appears, from those outside the IT arena, to be a simple act can end up as anything but simple. Within the context of an Oracle database there are methods of archiving data, some simple, some a bit more complex but still within the realm of possibility. Let’s look at those options and what they can, and cannot, offer.

The first option which comes to mind (mine, anyway) involves partitioning, an Enterprise Edition option (which should not be a surprise since companies who generate reams of data to archive usually install this edition). Archiving in this scenario is fairly easy: convert the relevant partitions to stand-alone tables in their own tablespace, separate from the ‘live’ production data. If this data is now on its own storage it can even be moved to another database server to facilitate access and not impact daily production. Let’s look at the steps involved with this option. First let’s create a partitioned table:

 CREATE TABLE archive_test
( 
        dusty DATE,
        vol VARCHAR2(60),
        info NUMBER
)
PARTITION  BY RANGE ( dusty ) 
(
PARTITION really_old 
   VALUES LESS THAN ( TO_DATE('01-apr-1999','dd-mon-yyyy'))
   TABLESPACE older_than_dirt,
PARTITION quite_old 
   VALUES LESS THAN ( TO_DATE('01-jul-2004','dd-mon-yyyy'))
   TABLESPACE old_as_dirt,
PARTITION sorta_new
   VALUES LESS THAN ( TO_DATE('01-oct-2009','dd-mon-yyyy'))
   TABLESPACE newer,
PARTITION really_new
   VALUES LESS THAN ( TO_DATE('01-jan-2012','dd-mon-yyyy'))
   TABLESPACE newest
);

-- 
-- Create local prefixed index
--

CREATE INDEX i_archives_l ON archive_test ( dusty,vol ) 
LOCAL ( 
PARTITION i_otd_one TABLESPACE i_otd_one,
PARTITION i_oad_two TABLESPACE i_oad_two,
PARTITION i_nwr_three TABLESPACE i_nwr_three,
PARTITION i_nwst_four TABLESPACE i_nwst_four
);

The last partition of our table is set to accept all data through 01/01/2012 so archiving data simply involves converting the desired partition to a stand-alone table, preferably stored on a different diskgroup or array than the current production data. [Sometimes a new partition is created prior to archiving the old partition (or partitions) to keep data flowing into the partitioned table. We’ll presume we have enough ‘room’ to avoid creating a new partition at archive time.] For the sake of illustration let’s put the destination tablespace, ARCHIVED_TS, in a separate ASM diskgroup (doing this allows for the movement of the diskgroup to another physical server for use by a separate Oracle instance). To archive the partition REALLY_OLD to a stand-alone table named REALLY_OLD_TBL:

--
-- Create empty table matching partition definition
--
create table really_old_tbl
( 
        dusty date,
        vol varchar2(60),
        info number
) tablespace archived_ts;   -- Tablespace created in separate ASM diskgroup or on separate storage

--
-- Check row count in desired partition
--

select count(*)
from archive_test partition(really_old);

--
-- Move partition data to stand-alone table
--
alter table archive_test
exchange partition really_old with table really_old_tbl with validation;

--
-- Verify all rows written to destination table
--

select count(*)
from really_old_tbl;

--
-- Drop now-empty partition presuming row counts match
--
alter table archive_test drop partition really_old;

The data is now archived to a separate table and is no longer available in ARCHIVE_TEST. This, however, makes QUITE_OLD the first partition, so any DUSTY value less than its upper limit will be stored there, including values which should have gone to REALLY_OLD. This may not be an issue if values that old are no longer generated, but it is an aspect to consider when archiving older data from a partitioned table.
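A quick way to see this behavior is a sketch against the ARCHIVE_TEST table defined above (the date value is arbitrary, chosen only to fall below the old REALLY_OLD boundary):

```sql
--
-- Insert a row that would have belonged in the dropped REALLY_OLD partition
--
insert into archive_test
values (to_date('15-mar-1998','dd-mon-yyyy'), 'Ancient entry', 42);

commit;

--
-- The row now lands in QUITE_OLD, the new "first" partition
--
select count(*)
from archive_test partition (quite_old)
where dusty < to_date('01-apr-1999','dd-mon-yyyy');
```

If rows that old must be rejected outright, the trigger approach shown later in this post is one way to police them.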

A second method is available for those not using partitioning: creating an archive table from the source table by selecting the desired data. (This also works for partitioned tables and may be the option of choice when a single archive table is desired, as the method illustrated above creates a new table for each partition to be archived.)


--
-- Create table and copy data
--
create table really_old_tbl
tablespace archived_ts
as select *
from archive_test
where dusty <= [some date here];

--
-- Verify all data copied successfully
--
select *
from archive_test
where (dusty,vol,info) not in (select * from really_old_tbl)
and dusty <= [some date here];

--
-- Delete from source table
--
delete
from archive_test
where dusty <= [some date here];

commit;

The data is now archived to a separate table. Changing the create table statement to an insert statement allows 'newer' archived data to be stored in the same archive table. Again, a similar condition exists: any data within the archived range can still be inserted into the source table, as no date limits exist to restrict inserts. A trigger can be used to restrict such inserts, as shown below:


SQL> create or replace trigger ins_chk_trg
  2  before insert on archive_test
  3  for each row
  4  declare
  5          mindt date;
  6  begin
  7          select max(dusty) into mindt from really_old_tbl;
  8          if :new.dusty <= mindt then
  9                  raise_application_error(-20987, 'Data ('||to_char(:new.dusty)||') outside of acceptable date range');
 10          end if;
 11  end;
 12  /

Trigger created.

SQL> insert into archive_test
  2  values (to_date('&mindt', 'RRRR-MM-DD HH24:MI:SS') - 1, 'Test value x', -1);
old   2: values (to_date('&mindt', 'RRRR-MM-DD HH24:MI:SS') - 1, 'Test value x', -1)
new   2: values (to_date('1999-03-08 14:17:40', 'RRRR-MM-DD HH24:MI:SS') - 1, 'Test value x', -1)
insert into archive_test
            *
ERROR at line 1:
ORA-20987: Data (07-MAR-99) outside of acceptable date range
ORA-06512: at "BLORPO.INS_CHK_TRG", line 6
ORA-04088: error during execution of trigger 'BLORPO.INS_CHK_TRG'
SQL>

Such a trigger can be used on partitioned and non-partitioned tables to police the inserts and reject those bearing dates present in the archive table. As the archive table data increases (due to subsequent inserts) the trigger will recognize the new maximum and use it to reject inserts.

Lest we forget the external utilities, both exp/imp and expdp/impdp can be used to archive data; the QUERY option to both exp and expdp allows extraction of specific data from a given table so that only the oldest data is exported. Oracle recommends using a parameter file with the QUERY option to avoid operating-system-specific escape characters. Additionally, expdp allows one query per table, with multiple table:query pairs specified in the schema.table:query format. A sample parameter file is shown below:


TABLES=employees, departments
QUERY=employees:'"WHERE department_id > 10 AND salary > 10000"'
QUERY=departments:'"WHERE department_id > 10"'
NOLOGFILE=y
DIRECTORY=dpump_dir1
DUMPFILE=exp1.dmp


This creates tables with the source names and a limited subset of the source data which can be imported into a different schema or different database. The imported tables can be renamed with the usual command (in releases 10g and later):


rename employees to employees_arch;
rename departments to departments_arch;

or in 11gR2 by using the REMAP_TABLE parameter to impdp:



REMAP_TABLE=employees:employees_arch
REMAP_TABLE=departments:departments_arch

[REMAP_TABLE will fail if the source table has named constraints in the same schema as those constraints will need to be created when the destination table is created. Constraints named SYS% will be created without error and the table or tables will be remapped.]

The final step in this process is to delete the now-archived data from the source table, as illustrated in the previous example for non-partitioned tables.

If you’re using a release older than 10g the process is a bit more time consuming, involving creating a new table with the desired name from the imported table then copying any index/constraint definitions to the new table, finally dropping the imported table once you’re certain the ‘renamed’ table has all necessary indexes and constraints in place.
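A sketch of that older approach follows; the table and index names here are illustrative (using the imported EMPLOYEES table from the example above), and the actual index and constraint definitions would be copied from the source:

```sql
--
-- Create the "renamed" table from the imported copy
--
create table employees_arch
tablespace archived_ts
as select * from employees;

--
-- Recreate any needed indexes and constraints on the new table
-- (column name is illustrative)
--
create index i_employees_arch on employees_arch (employee_id);

--
-- Drop the imported table once the new table is verified
--
drop table employees;
```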

Archiving older data is not a terribly difficult task (at least in an Oracle database), but it does take planning and attention to detail to ensure all of the desired data is properly archived and available to the end users. Maintaining the archive table (or tables) also takes planning, as applications may need to be written to access the archive directly and, in the case of multiple archive tables, be 'smart' enough to access the newer additions as they arrive. Remember, too, that the specifications for archiving revolve around local, state, federal (in the U.S.) and possibly international regulations, and the archiving scheme must be flexible enough to provide the required 'window' of access. It's also true that archived data may outlast the regulations which established it (unless legal issues preclude maintaining the archive beyond the prescribed date range); in such cases a sound storage strategy is a must, and it's not unusual for archived data to migrate from slower, cheaper disk to tape, as long as the data remains accessible; access speed is not a requirement for archived data.


October 31, 2008

Workin’ In The Mines

Filed under: flashback,General — dfitzjarrell @ 13:42

Flashback query, available since Oracle 9i, can really be a lifesaver when a need arises to resurrect data. But what if the UNDO has been overwritten by another process? To the rescue comes DBMS_LOGMNR, the LogMiner package. This utility dredges through the redo logs (and archive logs, if they are still available) and returns both the SQL statements to redo the transactions and the SQL statements to undo them. Since Oracle will automatically replay any in-doubt transactions occurring at the time of a crash, retrieving the redo SQL is usually not necessary. Returning the undo SQL, however, may be useful when flashback query cannot be used.

DBMS_LOGMNR has several procedures available, of which we’ll use three in this example: ADD_LOGFILE, START_LOGMNR and END_LOGMNR. You can start up LogMiner then add the logfiles you wish to ‘mine’, or add the logfiles then start the utility. I prefer the latter method, which is illustrated here. We’ll start by updating the EMP table:

SQL> update emp set comm = 999 where comm = 1000;

10 rows updated.

SQL> commit;

Commit complete.

SQL>

Now let’s mine the redo logs and see if we can undo that change:

SQL> --
SQL> -- Add every redo log to the 'mix' so LogMiner can
SQL> -- use them
SQL> --
SQL> select 'exec dbms_logmnr.add_logfile('''||member||''')'
  2  from v$logfile
  3
SQL>
SQL> spool add_logfiles.sql
SQL> /

'EXECDBMS_LOGMNR.ADD_LOGFILE('''||MEMBER||''')'
-------------------------------------------------------------------------------------
exec dbms_logmnr.add_logfile('/zing/flork/dapplenap/redo01.log')
exec dbms_logmnr.add_logfile('/zang/flork/dapplenap/redo02.log')
exec dbms_logmnr.add_logfile('/zong/flork/dapplenap/redo03.log')

SQL> spool off
SQL>
SQL>
SQL> @add_logfiles
SQL> exec dbms_logmnr.add_logfile('/zing/flork/dapplenap/redo01.log')

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.add_logfile('/zang/flork/dapplenap/redo02.log')

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.add_logfile('/zong/flork/dapplenap/redo03.log')

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> -- Fire up LogMiner
SQL> --
SQL> exec dbms_logmnr.start_logmnr(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG)

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> --
SQL> -- Prepare the environment for the output
SQL> --
SQL> set linesize 5000 trimspool on
SQL>
SQL>
SQL> --
SQL> -- Retrieve the SQL statements to 'undo' the
SQL> -- committed changes
SQL> --
SQL> select sql_undo
  2  from v$logmnr_contents
  3  where seg_owner = upper('&1')
  4
SQL>
SQL> spool undo_committed_changes.sql
SQL> /
Enter value for 1: ortofon
old   3: where seg_owner = upper('&1')
new   3: where seg_owner = upper('ortofon')

SQL_UNDO
-------------------------------------------------------------------------------------------------------------
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAA';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAD';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAF';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAG';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAH';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAI';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAK';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAL';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAM';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAN';

SQL> spool off
SQL>
SQL>
SQL> --
SQL> -- Shut down LogMiner
SQL> --
SQL> exec dbms_logmnr.end_logmnr

PL/SQL procedure successfully completed.

SQL>

And we have displayed the statements necessary to undo the updates made to the EMP table earlier. This won't work if the table is created NOLOGGING, as no redo entries will be written for direct path loads and inserts using the /*+ append */ hint; other insert/update transactions will still generate redo. Also notice that the original update was one statement, while the undo (from the redo logs) generates 10 statements, one for each row updated.

Redo logs were used in this example; however, you can use archivelogs as well (as noted earlier, they must still be available on the server). You can also ask Oracle to add redo logs and archivelogs as necessary; the CONTINUOUS_MINE option provides that functionality, requiring only that the first redo log be added via ADD_LOGFILE or that the starting SCN for the transactions of interest be provided:

SQL> --
SQL> -- Add one redo log to the 'mix'
SQL> --
SQL> select 'exec dbms_logmnr.add_logfile('''||member||''')'
  2  from v$logfile
  3  where rownum = 1
  4
SQL>
SQL> spool add_logfiles.sql
SQL> /

'EXECDBMS_LOGMNR.ADD_LOGFILE('''||MEMBER||''')'
-------------------------------------------------------------------------------------
exec dbms_logmnr.add_logfile('/zing/flork/dapplenap/redo01.log')

SQL> spool off
SQL>
SQL>
SQL> @add_logfiles
SQL> exec dbms_logmnr.add_logfile('/zing/flork/dapplenap/redo01.log')

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> -- Fire up LogMiner
SQL> --
SQL> -- The CONTINUOUS_MINE option cannot be used if the database 
SQL> -- is not running in ARCHIVELOG mode
SQL> --
SQL> -- But, hey, we are, so we're good to go
SQL> --
SQL> exec dbms_logmnr.start_logmnr(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE)

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> --
SQL> -- Prepare the environment for the output
SQL> --
SQL> set linesize 5000 trimspool on
SQL>
SQL>
SQL> --
SQL> -- Retrieve the SQL statements to 'undo' the
SQL> -- committed changes
SQL> --
SQL> select sql_undo
  2  from v$logmnr_contents
  3  where seg_owner = upper('&1')
  4
SQL>
SQL> spool undo_committed_changes.sql
SQL> /
Enter value for 1: ortofon
old   3: where seg_owner = upper('&1')
new   3: where seg_owner = upper('ortofon')

SQL_UNDO
-------------------------------------------------------------------------------------------------------------
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAA';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAD';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAF';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAG';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAH';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAI';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAK';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAL';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAM';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAN';

SQL> spool off
SQL>
SQL>
SQL> --
SQL> -- Shut down LogMiner
SQL> --
SQL> exec dbms_logmnr.end_logmnr

PL/SQL procedure successfully completed.

SQL>

We, of course, found the same records as in the prior example, but we didn't need to include every log file in the database; the CONTINUOUS_MINE option kept adding logs to the mix to find the information we requested. Yes, we supplied more than one option to the options parameter; we simply added the values together and DBMS_LOGMNR knew we wanted both options enabled.
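The starting-SCN variant mentioned earlier might look like this (a sketch; the SCN value is illustrative and would normally come from DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER or V$DATABASE at the time of interest):

```sql
--
-- Start LogMiner at a known starting SCN; with CONTINUOUS_MINE
-- no ADD_LOGFILE calls are needed at all, as Oracle locates the
-- redo and archive logs covering that SCN
--
begin
  dbms_logmnr.start_logmnr(
    startscn => 1234567,
    options  => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.continuous_mine);
end;
/
```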

LogMiner won’t solve every data resurrection problem, nor will flashback query, however knowing these options are available may make your life as a DBA a bit less hectic and stressful.

Well, we can dream.

January 9, 2008

Resurrecting The Dead (or Gee, I Thought That Data Was Gone Forever)

Filed under: flashback — dfitzjarrell @ 16:29

Occasionally an end-user does the unthinkable, and changes data permanently (or so it appears) without checking the results before issuing a commit. Ooops. It used to be that to restore that data one had basically two choices:

A) Restore the data from a known good copy made before the changes were enacted
B) Restore the table from the last known good export

Yes, there was a third choice, an incomplete tablespace recovery, but that presumed the database to be in ARCHIVELOG mode and, gasp, there are databases running in NOARCHIVELOG mode where this wouldn't work. So, what is a DBA to do? If you're running 9i you can enable flashback query by using the dbms_flashback package, which was nice but rather cumbersome in implementation, as it only allowed you to query the flashback data while enabled; updates to existing tables (to restore known good data) were somewhat convoluted in execution. It was necessary to enable flashback mode, open a cursor, then disable flashback mode to effect the updates:

declare
  --
  -- Cursor to fetch the required data
  --
  cursor c1 is
  select flerg, snerm, anguplast
  from snangpiester;

  --
  -- A place to stash each row of the returned data
  --
  c1_rec c1%rowtype;

begin

  --
  -- We flashback the data image to before the problem began
  --
  dbms_flashback.enable_at_time(to_timestamp('16-SEP-2007 06:53:00 AM','DD-MON-RRRR HH:MI:SS AM'));

  --
  -- Open the cursor while in flashback mode
  --
  open c1;

  --
  -- Disable flashback mode so we can update the problem table
  --
  dbms_flashback.disable;

  --
  -- Go fetch the good data and update the problem table
  --
  loop
          fetch c1 into c1_rec;
          exit when c1%notfound;  -- say goodbye when there's no more data
          update snangpiester
          set anguplast = c1_rec.anguplast
          where flerg = c1_rec.flerg
          and snerm = c1_rec.snerm;
  end loop;

  --
  -- Commit the changes
  --

  commit;

end;
/

Enter Oracle 10g and later releases where flashback query is enabled and does not require the use of any additional packages. Flashback query can use the SCN or a timestamp value as a reference point, making the use of such queries much easier. Let’s see how using flashback query can resurrect data thought to be lost by committed user changes.

Flashback query syntax is as follows:

select [select list here]
from [table]
as of timestamp to_timestamp('date/time or timestamp string');

Note the 'as of timestamp' clause [described fully at http://docs.oracle.com]; it tells Oracle to use flashback mode to rebuild the data image as of the provided date and time, utilising data found in the UNDO segments preserved by the undo_retention parameter setting. To illustrate the power of flashback query let's look at an example using the EMP table.
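Flashback query can also reference an SCN rather than a timestamp; a minimal sketch (the SCN value shown is illustrative):

```sql
--
-- Capture the current SCN before making any changes
--
select dbms_flashback.get_system_change_number from dual;

--
-- Later, query the table as it existed at that SCN
--
select *
from emp
as of scn 1234567;
```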

Koffi Cupps, assistant HR manager and part-time ankle model, wanted to increase the salaries of the sales team by 17 percent and add $85 to each salesman’s commission; unfortunately she updated the entire employee table without first making a copy of the original data and committed the changes before checking the results:

SQL> update emp
   2 set sal = sal*1.17, comm = nvl(comm, 0) + 85;

14 rows updated.

SQL> commit;

Commit complete.

SQL> select *
   2 from emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        936         85         20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1872        385         30
      7521 WARD       SALESMAN        7698 22-FEB-81     1462.5        585         30
      7566 JONES      MANAGER         7839 02-APR-81    3480.75         85         20
      7654 MARTIN     SALESMAN        7698 28-SEP-81     1462.5       1485         30
      7698 BLAKE      MANAGER         7839 01-MAY-81     3334.5         85         30
      7782 CLARK      MANAGER         7839 09-JUN-81     2866.5         85         10
      7788 SCOTT      ANALYST         7566 09-DEC-82       3510         85         20
      7839 KING       PRESIDENT            17-NOV-81       5850         85         10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1755         85         30
      7876 ADAMS      CLERK           7788 12-JAN-83       1287         85         20
      7900 JAMES      CLERK           7698 03-DEC-81     1111.5         85         30
      7902 FORD       ANALYST         7566 03-DEC-81       3510         85         20
      7934 MILLER     CLERK           7782 23-JAN-82       1521         85         10

14 rows selected.

SQL>

Fortunately her younger sister, Dixie, is the Oracle DBA for the company. Knowing the HR software was using Oracle 11.2.0.4, Dixie sprang into action to restore the data as it existed prior to the change. (It was also a good thing this was early on a Monday morning and that undo_retention was set to a sufficient value, else Dixie could have received an ORA-01555 because the necessary undo blocks had been overwritten.)

SQL> select *
   2  from emp as of timestamp to_timestamp(trunc(sysdate));

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20 
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30 
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30 
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20 
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30 
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30 
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10 
      7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20 
      7839 KING       PRESIDENT            17-NOV-81       5000                    10 
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30 
      7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30 
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20 
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

All Dixie needed to do was delete the incorrect data and insert the old values:

SQL> delete from emp;

14 rows deleted.

SQL> insert into emp
  2  select * from emp as of timestamp to_timestamp(trunc(sysdate));

14 rows created.

SQL> commit;

Commit complete.

Of course she also could have updated the ‘bad’ data using the flashback image:

SQL> update emp e
2 set (sal, comm) = (select sal, comm 
3 from emp 
4 as of timestamp to_timestamp(trunc(sysdate)) 
5 where empno = e.empno);

14 rows updated.

SQL> commit;

Commit complete.

SQL> select *
2 from emp;


     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20 
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30 
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30 
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20 
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30 
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30 
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10 
      7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20 
      7839 KING       PRESIDENT            17-NOV-81       5000                    10 
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30 
      7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30 
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20 
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL>

As you can see the original state of the data has been restored, Dixie Cupps is a hero and Koffi Cupps can try her update again, being careful to not commit the changes until she checks that they are correct. Oh, and after making a copy of the table BEFORE she issues any update statements.

If you’re planning on using flashback query on a regular basis and you’re running Oracle version 10gR2 or later it might be a wise idea to alter the UNDO tablespace to provide a guaranteed undo_retention window:

SQL> alter tablespace undotbs1 retention guarantee;

Tablespace altered.

SQL>

The tradeoff for this is that DML operations may fail due to a lack of writable UNDO space: Oracle will fail transactions if there is insufficient UNDO space to honor both the undo_retention setting and the current transactional load, since guaranteeing the retention causes Oracle to keep unexpired undo data for the full undo_retention period, whereas without the guarantee Oracle may overwrite unexpired undo data should the need arise. (Even when it can overwrite unexpired undo data, Oracle will do its level best to maintain the configured undo_retention without the guarantee.) Testing this on a non-production database, with a comparable workload, would be a very good idea. Such actions can easily be undone:

SQL> alter tablespace undotbs1 retention noguarantee;

Tablespace altered.

SQL>

in the event a problem such as multiple DML operations failing rears its ugly head.
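One way to gauge the risk before enabling the guarantee is to watch the undo statistics; a sketch against V$UNDOSTAT (each row covers a ten-minute interval):

```sql
--
-- Nonzero UNXPSTEALCNT means unexpired undo is being overwritten,
-- nonzero NOSPACEERRCNT means transactions failed for lack of undo space,
-- and SSOLDERRCNT counts ORA-01555 "snapshot too old" errors
--
select begin_time, end_time,
       unxpstealcnt, nospaceerrcnt, ssolderrcnt
from v$undostat
order by begin_time;
```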

Flashback query can be a very useful tool to the DBA in such situations, and also for some reporting tasks as data can be queried as of a given timestamp (again, as long as the undo_retention setting provides the necessary window). Let’s see how a salary report can be generated after salaries for some employees have been adjusted:

SQL> --
SQL> -- Get reference timestamp before changes
SQL> --
SQL> select to_timestamp(to_char(sysdate, 'DD-MON-YYYY HH:MI:SS AM')) curr_timestamp from dual;

CURR_TIMESTAMP
---------------------------------------------------------------------------
09-JAN-08 11.54.40.000000000 AM

SQL>
SQL> --
SQL> -- Change the data in EMP
SQL> --
SQL> update emp
  2  set sal = sal+120, comm = nvl(comm, 0) + 85
  3  where job = 'SALESMAN';

4 rows updated.

SQL>
SQL> --
SQL> -- Commit and make the changes "permanent"
SQL> --
SQL> commit;

Commit complete.

SQL>
SQL> --
SQL> -- Show the current state of the data
SQL> --
SQL> select *
  2  from emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20 
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1720        385         30 
      7521 WARD       SALESMAN        7698 22-FEB-81       1370        585         30 
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20 
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1370       1485         30 
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30 
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10 
      7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20 
      7839 KING       PRESIDENT            17-NOV-81       5000                    10 
      7844 TURNER     SALESMAN        7698 08-SEP-81       1620         85         30 
      7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30 
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20 
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL>
SQL>
SQL> --
SQL> -- Generate a report of employee number,
SQL> -- employee name, job, salary, percent increase,
SQL> -- commission, increase
SQL> --
SQL> with get_old as(
  2       select empno, ename, job, sal, nvl(comm,0) comm
  3       from emp
  4       as of timestamp to_timestamp('&curr_ts')
  5  ),
  6  get_curr as (
  7       select empno, ename, job, sal, comm
  8       from emp
  9  )
 10  select c.empno, c.ename, c.job, c.sal,
 11  (c.sal - o.sal)/o.sal*100 sal_incr,
 12  c.comm,
 13  (c.comm - o.comm) comm_incr
 14  from get_old o, get_curr c
 15  where c.empno = o.empno;

     EMPNO ENAME      JOB              SAL   SAL_INCR       COMM  COMM_INCR
---------- ---------- --------- ---------- ---------- ---------- ---------- 
      7369 SMITH      CLERK            800                     0 
      7499 ALLEN      SALESMAN        1720        7.5        385         85 
      7521 WARD       SALESMAN        1370        9.6        585         85 
      7566 JONES      MANAGER         2975                     0 
      7654 MARTIN     SALESMAN        1370        9.6       1485         85 
      7698 BLAKE      MANAGER         2850                     0 
      7782 CLARK      MANAGER         2450                     0 
      7788 SCOTT      ANALYST         3000                     0 
      7839 KING       PRESIDENT       5000                     0 
      7844 TURNER     SALESMAN        1620          8         85         85 
      7876 ADAMS      CLERK           1100                     0
      7900 JAMES      CLERK            950                     0 
      7902 FORD       ANALYST         3000                     0 
      7934 MILLER     CLERK           1300                     0

14 rows selected.

SQL>

Flashback query, in 10g and later releases, is one of the nicer enhancements to the database. How did we ever live without it? Oh, I guess by having copies of tables lying around that no one keeps track of …
