Oracle Tips and Tricks — David Fitzjarrell

October 31, 2008

Workin’ In The Mines

Filed under: flashback,General — dfitzjarrell @ 13:42

Flashback query, available since Oracle 9i, can really be a lifesaver when a need arises to resurrect data. But what if the UNDO has been overwritten by another process? To the rescue comes DBMS_LOGMNR, the LogMiner package. This utility dredges through the redo logs (and archive logs, if they are still available) to return both the SQL statements to redo the transactions and the SQL statements to undo those same transactions. Since Oracle automatically replays any in-doubt transactions occurring at the time of a crash, retrieving the redo SQL is usually not necessary. Returning the undo SQL, however, may be useful when flashback query cannot be used.
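
As a refresher, a flashback query is nothing more than a SELECT with an AS OF clause; a minimal sketch (the 15-minute window and the COMM value are illustrative only) looks like this:

--
-- Flashback query sketch; the time window and COMM value
-- are illustrative only
--
select empno, ename, comm
from emp
as of timestamp (systimestamp - interval '15' minute)
where comm = 1000;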

DBMS_LOGMNR provides several procedures, three of which we’ll use in this example: ADD_LOGFILE, START_LOGMNR and END_LOGMNR. You can start up LogMiner and then add the logfiles you wish to ‘mine’, or add the logfiles first and then start the utility. I prefer the latter method, which is illustrated here. We’ll start by updating the EMP table:

SQL> update emp set comm = 999 where comm = 1000;

10 rows updated.

SQL> commit;

Commit complete.

SQL>

Now let’s mine the redo logs and see if we can undo that change:

SQL> --
SQL> -- Add every redo log to the 'mix' so LogMiner can
SQL> -- use them
SQL> --
SQL> select 'exec dbms_logmnr.add_logfile('''||member||''')'
  2  from v$logfile
  3
SQL>
SQL> spool add_logfiles.sql
SQL> /

'EXECDBMS_LOGMNR.ADD_LOGFILE('''||MEMBER||''')'
-------------------------------------------------------------------------------------
exec dbms_logmnr.add_logfile('/zing/flork/dapplenap/redo01.log')
exec dbms_logmnr.add_logfile('/zang/flork/dapplenap/redo02.log')
exec dbms_logmnr.add_logfile('/zong/flork/dapplenap/redo03.log')

SQL> spool off
SQL>
SQL>
SQL> @add_logfiles
SQL> exec dbms_logmnr.add_logfile('/zing/flork/dapplenap/redo01.log')

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.add_logfile('/zang/flork/dapplenap/redo02.log')

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.add_logfile('/zong/flork/dapplenap/redo03.log')

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> -- Fire up LogMiner
SQL> --
SQL> exec dbms_logmnr.start_logmnr(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG)

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> --
SQL> -- Prepare the environment for the output
SQL> --
SQL> set linesize 5000 trimspool on
SQL>
SQL>
SQL> --
SQL> -- Retrieve the SQL statements to 'undo' the
SQL> -- committed changes
SQL> --
SQL> select sql_undo
  2  from v$logmnr_contents
  3  where seg_owner = upper('&1')
  4
SQL>
SQL> spool undo_committed_changes.sql
SQL> /
Enter value for 1: ortofon
old   3: where seg_owner = upper('&1')
new   3: where seg_owner = upper('ortofon')

SQL_UNDO
-------------------------------------------------------------------------------------------------------------
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAA';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAD';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAF';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAG';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAH';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAI';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAK';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAL';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAM';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAN';

SQL> spool off
SQL>
SQL>
SQL> --
SQL> -- Shut down LogMiner
SQL> --
SQL> exec dbms_logmnr.end_logmnr

PL/SQL procedure successfully completed.

SQL>

And we have displayed the statements necessary to undo the updates made to the EMP table earlier. Note that this won’t work for operations that bypass redo generation: if the table is created NOLOGGING, no redo entries are written for direct path loads and inserts using the /*+ append */ hint, although conventional insert/update/delete transactions still generate redo. Also notice that the original update was one statement, while the undo (generated from the redo logs) is 10 statements, one for each row updated.
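
If you’re not sure whether a table was created NOLOGGING, a quick dictionary check (a sketch; the table name is illustrative) settles the question:

--
-- Check the LOGGING attribute for the table of interest
--
select table_name, logging
from user_tables
where table_name = 'EMP';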

Redo logs were used in this example, but archived logs can be mined as well (as noted earlier, they must still be available on the server). You can also ask Oracle to add redo logs and archived logs as necessary; the CONTINUOUS_MINE option provides that functionality, requiring only that the first redo log be added via ADD_LOGFILE or that the starting SCN for the transactions of interest be provided:

SQL> --
SQL> -- Add one redo log to the 'mix'
SQL> --
SQL> select 'exec dbms_logmnr.add_logfile('''||member||''')'
  2  from v$logfile
  3  where rownum = 1
  4
SQL>
SQL> spool add_logfiles.sql
SQL> /

'EXECDBMS_LOGMNR.ADD_LOGFILE('''||MEMBER||''')'
-------------------------------------------------------------------------------------
exec dbms_logmnr.add_logfile('/zing/flork/dapplenap/redo01.log')

SQL> spool off
SQL>
SQL>
SQL> @add_logfiles
SQL> exec dbms_logmnr.add_logfile('/zing/flork/dapplenap/redo01.log')

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> -- Fire up LogMiner
SQL> --
SQL> -- The CONTINUOUS_MINE option cannot be used if the database 
SQL> -- is not running in ARCHIVELOG mode
SQL> --
SQL> -- But, hey, we are, so we're good to go
SQL> --
SQL> exec dbms_logmnr.start_logmnr(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE)

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> --
SQL> -- Prepare the environment for the output
SQL> --
SQL> set linesize 5000 trimspool on
SQL>
SQL>
SQL> --
SQL> -- Retrieve the SQL statements to 'undo' the
SQL> -- committed changes
SQL> --
SQL> select sql_undo
  2  from v$logmnr_contents
  3  where seg_owner = upper('&1')
  4
SQL>
SQL> spool undo_committed_changes.sql
SQL> /
Enter value for 1: ortofon
old   3: where seg_owner = upper('&1')
new   3: where seg_owner = upper('ortofon')

SQL_UNDO
-------------------------------------------------------------------------------------------------------------
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAA';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAD';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAF';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAG';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAH';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAI';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAK';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAL';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAM';
update "ORTOFON"."EMP" set "COMM" = '1000' where "COMM" = '999' and ROWID = 'AAAanJAAEAAAAIvAAN';

SQL> spool off
SQL>
SQL>
SQL> --
SQL> -- Shut down LogMiner
SQL> --
SQL> exec dbms_logmnr.end_logmnr

PL/SQL procedure successfully completed.

SQL>

We, of course, found the same records as in the prior example, but we didn’t need to add every log file in the database to get this to work; the CONTINUOUS_MINE option kept adding logs to the mix to find the information we requested. Yes, we supplied more than one option to the OPTIONS parameter; we simply added the values together and DBMS_LOGMNR understood that we wanted both options enabled.
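
Had we chosen to supply a starting SCN instead of seeding LogMiner with a logfile, the call would look something like this sketch (the SCN shown is purely illustrative; any SCN covered by the available redo or archived logs will do):

--
-- Start LogMiner from a known SCN; CONTINUOUS_MINE locates and adds
-- the necessary redo/archived logs (the SCN below is illustrative)
--
begin
        dbms_logmnr.start_logmnr(
                startscn => 1234567,
                options  => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.continuous_mine
        );
end;
/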

LogMiner won’t solve every data resurrection problem, nor will flashback query; however, knowing these options are available may make your life as a DBA a bit less hectic and stressful.

Well, we can dream.

October 20, 2008

Lock ‘Em Up

Filed under: General — dfitzjarrell @ 17:16

There have been, recently, a large number of requests to google.com about Oracle database locks. I’ve blogged before on locks and how to see who has which object locked, but I haven’t said much about the nature and intent of locks. I think it’s about time that topic was addressed.

Oracle, in the on-line documentation, states:

“Locks are mechanisms that prevent destructive interaction between transactions accessing the same resource — either user objects such as tables and rows or system objects not visible to users, such as shared data structures in memory and data dictionary rows.”

[To get a bit picky here, locks on shared data structures in memory are usually called latches, but they perform the same task so we’ll carry on with the discussion.]

So a lock, when taken, prevents ‘destructive interaction’ between transactions. What DOES that mean? Perhaps an example of two sessions trying to modify data in the same table will illustrate (this creates a ‘blocking’ lock, and querying V$LOCK where BLOCK > 0 would show session #2 blocked by session #1). Session #1 locks and modifies data in the EMP table by taking both a TX (row-level transaction) lock and a TM (table) lock in row-exclusive mode:

SQL> select empno, ename, sal
  2  from emp;

     EMPNO ENAME             SAL
---------- ---------- ----------
      7935 SMITH             900
      7369 SMYTHE            800
      7499 ALLEN            1600
      7521 WARD             1250
      7566 JONES            2975
      7654 MARTIN           1250
      7698 BLAKE            2850
      7782 CLARK            2450
      7788 SCOTT            3000
      7839 KING             5000
      7844 TURNER           1500

     EMPNO ENAME             SAL
---------- ---------- ----------
      7876 ADAMS            1100
      7900 JAMES             950
      7902 FORD             3000
      7934 MILLER           1300
      7955 SMITH             900
      7956 SMYTHE            800
      7957 ALLEN            1600
      7958 WARD             1250
      7959 JONES            2975
      7960 MARTIN           1250
      7961 BLAKE            2850

     EMPNO ENAME             SAL
---------- ---------- ----------
      7962 CLARK            2450
      7963 SCOTT            3000
      7964 KING             5000
      7965 TURNER           1500
      7966 ADAMS            1100
      7967 JAMES             950
      7968 FORD             3000
      7969 MILLER           1300
      8000 SMITH             900
      8001 SMYTHE         1382.4
      8002 ALLEN          2764.8

     EMPNO ENAME             SAL
---------- ---------- ----------
      8003 WARD             2160
      8004 JONES          5140.8
      8005 MARTIN           2160
      8006 BLAKE          4924.8
      8007 CLARK          4233.6
      8008 SCOTT            5184
      8009 KING             8640
      8010 TURNER           2592
      8011 ADAMS          1900.8
      8012 JAMES          1641.6
      8013 FORD             5184

     EMPNO ENAME             SAL
---------- ---------- ----------
      8014 MILLER         2246.4
      8015 SMITH          1555.2
      8016 SMYTHE         1382.4
      8017 ALLEN          2764.8
      8018 WARD             2160
      8019 JONES          5140.8
      8020 MARTIN           2160
      8021 BLAKE          4924.8
      8022 CLARK          4233.6
      8023 SCOTT            5184
      8024 KING             8640

     EMPNO ENAME             SAL
---------- ---------- ----------
      8025 TURNER           2592
      8026 ADAMS          1900.8
      8027 JAMES          1641.6
      8028 FORD             5184
      8029 MILLER         2246.4

60 rows selected.

SQL>
SQL> update emp
  2  set sal = sal *.985
  3  where empno <= 8000;

31 rows updated.

SQL>
SQL> exec dbms_lock.sleep(60)

At this point we start session #2 and try to modify the same data in the same table:

SQL> select empno, ename, sal
  2  from emp;

     EMPNO ENAME             SAL
---------- ---------- ----------
      7935 SMITH             900
      7369 SMYTHE            800
      7499 ALLEN            1600
      7521 WARD             1250
      7566 JONES            2975
      7654 MARTIN           1250
      7698 BLAKE            2850
      7782 CLARK            2450
      7788 SCOTT            3000
      7839 KING             5000
      7844 TURNER           1500

     EMPNO ENAME             SAL
---------- ---------- ----------
      7876 ADAMS            1100
      7900 JAMES             950
      7902 FORD             3000
      7934 MILLER           1300
      7955 SMITH             900
      7956 SMYTHE            800
      7957 ALLEN            1600
      7958 WARD             1250
      7959 JONES            2975
      7960 MARTIN           1250
      7961 BLAKE            2850

     EMPNO ENAME             SAL
---------- ---------- ----------
      7962 CLARK            2450
      7963 SCOTT            3000
      7964 KING             5000
      7965 TURNER           1500
      7966 ADAMS            1100
      7967 JAMES             950
      7968 FORD             3000
      7969 MILLER           1300
      8000 SMITH             900
      8001 SMYTHE         1382.4
      8002 ALLEN          2764.8

     EMPNO ENAME             SAL
---------- ---------- ----------
      8003 WARD             2160
      8004 JONES          5140.8
      8005 MARTIN           2160
      8006 BLAKE          4924.8
      8007 CLARK          4233.6
      8008 SCOTT            5184
      8009 KING             8640
      8010 TURNER           2592
      8011 ADAMS          1900.8
      8012 JAMES          1641.6
      8013 FORD             5184

     EMPNO ENAME             SAL
---------- ---------- ----------
      8014 MILLER         2246.4
      8015 SMITH          1555.2
      8016 SMYTHE         1382.4
      8017 ALLEN          2764.8
      8018 WARD             2160
      8019 JONES          5140.8
      8020 MARTIN           2160
      8021 BLAKE          4924.8
      8022 CLARK          4233.6
      8023 SCOTT            5184
      8024 KING             8640

     EMPNO ENAME             SAL
---------- ---------- ----------
      8025 TURNER           2592
      8026 ADAMS          1900.8
      8027 JAMES          1641.6
      8028 FORD             5184
      8029 MILLER         2246.4

60 rows selected.

SQL>
SQL> update emp
  2  set sal = sal * 1.005
  3  where empno <= 8000;

The update process stops here, waiting for session #1 to commit its changes. Session #1 has an exclusive lock on the data we want to modify in session #2, so session #2 must wait until that lock is released (by a commit or a rollback) before it can effect any changes. Continuing in both sessions, we see that session #1 has ended its waiting period and committed its changes:

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> select empno, ename, sal
  2  from emp;

     EMPNO ENAME             SAL
---------- ---------- ----------
      7935 SMITH           886.5
      7369 SMYTHE            788
      7499 ALLEN            1576
      7521 WARD          1231.25
      7566 JONES         2930.38
      7654 MARTIN        1231.25
      7698 BLAKE         2807.25
      7782 CLARK         2413.25
      7788 SCOTT            2955
      7839 KING             4925
      7844 TURNER         1477.5

     EMPNO ENAME             SAL
---------- ---------- ----------
      7876 ADAMS          1083.5
      7900 JAMES          935.75
      7902 FORD             2955
      7934 MILLER         1280.5
      7955 SMITH           886.5
      7956 SMYTHE            788
      7957 ALLEN            1576
      7958 WARD          1231.25
      7959 JONES         2930.38
      7960 MARTIN        1231.25
      7961 BLAKE         2807.25

     EMPNO ENAME             SAL
---------- ---------- ----------
      7962 CLARK         2413.25
      7963 SCOTT            2955
      7964 KING             4925
      7965 TURNER         1477.5
      7966 ADAMS          1083.5
      7967 JAMES          935.75
      7968 FORD             2955
      7969 MILLER         1280.5
      8000 SMITH           886.5
      8001 SMYTHE         1382.4
      8002 ALLEN          2764.8

     EMPNO ENAME             SAL
---------- ---------- ----------
      8003 WARD             2160
      8004 JONES          5140.8
      8005 MARTIN           2160
      8006 BLAKE          4924.8
      8007 CLARK          4233.6
      8008 SCOTT            5184
      8009 KING             8640
      8010 TURNER           2592
      8011 ADAMS          1900.8
      8012 JAMES          1641.6
      8013 FORD             5184

     EMPNO ENAME             SAL
---------- ---------- ----------
      8014 MILLER         2246.4
      8015 SMITH          1555.2
      8016 SMYTHE         1382.4
      8017 ALLEN          2764.8
      8018 WARD             2160
      8019 JONES          5140.8
      8020 MARTIN           2160
      8021 BLAKE          4924.8
      8022 CLARK          4233.6
      8023 SCOTT            5184
      8024 KING             8640

     EMPNO ENAME             SAL
---------- ---------- ----------
      8025 TURNER           2592
      8026 ADAMS          1900.8
      8027 JAMES          1641.6
      8028 FORD             5184
      8029 MILLER         2246.4

60 rows selected.

SQL>

We can see that the only changes visible are those committed by session #1. Session #2 now has ‘free rein’ over that same data and implements its changes:


31 rows updated.

SQL>
SQL> exec dbms_lock.sleep(60)

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> select empno, ename, sal
  2  from emp;

     EMPNO ENAME             SAL
---------- ---------- ----------
      7935 SMITH          890.93
      7369 SMYTHE         791.94
      7499 ALLEN         1583.88
      7521 WARD          1237.41
      7566 JONES         2945.03
      7654 MARTIN        1237.41
      7698 BLAKE         2821.29
      7782 CLARK         2425.32
      7788 SCOTT         2969.78
      7839 KING          4949.63
      7844 TURNER        1484.89

     EMPNO ENAME             SAL
---------- ---------- ----------
      7876 ADAMS         1088.92
      7900 JAMES          940.43
      7902 FORD          2969.78
      7934 MILLER         1286.9
      7955 SMITH          890.93
      7956 SMYTHE         791.94
      7957 ALLEN         1583.88
      7958 WARD          1237.41
      7959 JONES         2945.03
      7960 MARTIN        1237.41
      7961 BLAKE         2821.29

     EMPNO ENAME             SAL
---------- ---------- ----------
      7962 CLARK         2425.32
      7963 SCOTT         2969.78
      7964 KING          4949.63
      7965 TURNER        1484.89
      7966 ADAMS         1088.92
      7967 JAMES          940.43
      7968 FORD          2969.78
      7969 MILLER         1286.9
      8000 SMITH          890.93
      8001 SMYTHE         1382.4
      8002 ALLEN          2764.8

     EMPNO ENAME             SAL
---------- ---------- ----------
      8003 WARD             2160
      8004 JONES          5140.8
      8005 MARTIN           2160
      8006 BLAKE          4924.8
      8007 CLARK          4233.6
      8008 SCOTT            5184
      8009 KING             8640
      8010 TURNER           2592
      8011 ADAMS          1900.8
      8012 JAMES          1641.6
      8013 FORD             5184

     EMPNO ENAME             SAL
---------- ---------- ----------
      8014 MILLER         2246.4
      8015 SMITH          1555.2
      8016 SMYTHE         1382.4
      8017 ALLEN          2764.8
      8018 WARD             2160
      8019 JONES          5140.8
      8020 MARTIN           2160
      8021 BLAKE          4924.8
      8022 CLARK          4233.6
      8023 SCOTT            5184
      8024 KING             8640

     EMPNO ENAME             SAL
---------- ---------- ----------
      8025 TURNER           2592
      8026 ADAMS          1900.8
      8027 JAMES          1641.6
      8028 FORD             5184
      8029 MILLER         2246.4

60 rows selected.

SQL>

No destructive interaction between the sessions could occur; session #2 could not update the data locked by session #1 until that lock was released. Once the first lock was gone a second lock, by another session, could be taken so further updates could be processed. Were there a third session attempting to modify this same data it, too, would wait as session #2 did, preventing multiple sessions from updating the same data on top of, and interfering with, the current transaction.
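
As mentioned earlier, V$LOCK makes it fairly easy to see who is blocking whom while the second session waits; a query along these lines (a sketch, and only one of many variations) pairs up blockers and waiters:

--
-- A blocking session has BLOCK > 0; its waiters request a lock
-- on the same resource (matching ID1 and ID2)
--
select blocker.sid blocker_sid, waiter.sid waiter_sid,
       blocker.type lock_type, waiter.request requested_mode
from v$lock blocker, v$lock waiter
where blocker.block > 0
and waiter.request > 0
and blocker.id1 = waiter.id1
and blocker.id2 = waiter.id2;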

Locks can be a problem, however, especially when one session has locked data another session needs, and the waiting session has locked data the first session wants to modify. This is known as a deadlock, and later releases of Oracle detect such conditions and issue a rollback on the ‘newer’ transaction (here ‘newer’ means the transaction that didn’t lock the data it’s waiting upon but holds data a transaction higher up the queue needs to access or modify). To give a representation of this:

— Transaction A modifies data in the EMP table
— Transaction B modifies data in the DEPT table
— Transaction A now needs to modify records in the DEPT table, but transaction B has those records locked
— Transaction B now wants to modify records in the EMP table that transaction A has locked
— Oracle will ‘rollback’ transaction B as it’s in the queue after transaction A and holds data that transaction A needs to complete its work

Older releases of Oracle (pre-9.0) will let a deadlock continue forever, so the DBA will need to handle such occurrences in 8.1.7.4 and earlier versions.
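
For the curious, here is a minimal sketch of a statement sequence that produces such a deadlock (the tables and values are illustrative); one of the two sessions will receive ‘ORA-00060: deadlock detected while waiting for resource’:

--
-- Session A:
update emp set sal = sal * 1.01 where empno = 7369;

-- Session B:
update dept set loc = 'DALLAS' where deptno = 20;

-- Session A (now waits on the row session B has locked):
update dept set loc = 'BOSTON' where deptno = 20;

-- Session B (closes the loop; Oracle detects the deadlock and one of the
-- two sessions receives ORA-00060: deadlock detected while waiting for resource):
update emp set sal = sal * 1.05 where empno = 7369;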

Some locks affect practically no one, such as the row share (RS) table locks taken by SELECT ... FOR UPDATE statements (a plain SELECT takes no locks at all); others, such as those taken by DDL statements, prevent any action against the affected object. Additionally, since DDL statements implicitly commit before and after execution, the exclusive object lock is only active for a short period of time, which in many cases won’t noticeably affect other sessions.

Locks may be considered by some to be inconvenient, but they are necessary to preserve data integrity and provide transaction isolation (see the example above). And transaction isolation and data integrity are important if your data has any worth at all (and everyone’s data is worth something to their user community).

“Lock ’em up, Sheriff!”

October 14, 2008

Is Anybody There?

Filed under: General — dfitzjarrell @ 18:14

It’s interesting to see what people submit to google.com; lately this query came to my attention:

“how to see when the table is populated in oracle”

which, on the face of it, seems to be a rather silly question, with a somewhat obvious answer:

“select count(*) from [insert table name here];”

But, is that a truly reliable way to take the inventory of a table’s contents? Let’s investigate this further and see what might not be so obvious from a cursory glance.

I’ve discussed, in more than one post, how Oracle provides a consistent image of the data for the point in time when a query begins. And I’ve proven you can’t rely upon a select statement to provide a repeatable and usable result to manually generate sequential numbers. This same mechanism can wreak havoc on determining whether a table is populated or not because at the time you may be querying the table’s contents someone else may be deleting those same records that appear before your very eyes. Absent a commit in the ‘offending’ session you’ll never know that Mercantile Flabbenjammitz just obliterated all of that data because, through the miracle of the UNDO records, you can still see every byte that used to be stored in that table. Along comes a commit and — whoosh! — the data you saw just a moment earlier now vanishes.

So how DOES one know when a table is populated? Popular ‘wisdom’ provides several possible answers:

Table statistics
The High-Water Mark (HWM)
The aforementioned “select count(*) from …”
Querying DBA_EXTENTS

How many of those are reliable? Let’s find out.

Table statistics are a really good indicator of whether or not a table is populated, until, well, it isn’t populated any more:

SQL> create table blarzenflotz(
  2        glerm   number,
  3        snangul varchar2(40),
  4        borm    number
  5  );

Table created.

SQL>
SQL> begin
  2        for i in 1..1000000 loop
  3         insert into blarzenflotz
  4         values(i, 'Schnerkenporf vasul '||i, mod(i*47, 3));
  5        end loop;
  6
  7        commit;
  8
  9  end;
 10  /

PL/SQL procedure successfully completed.

SQL>
SQL> begin
  2        for i in 1..1000000 loop
  3         insert into blarzenflotz
  4         values(i, 'Schnerkenporf vasul '||i, mod(i*47, 3));
  5        end loop;
  6
  7        commit;
  8
  9  end;
 10  /

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BLARZENFLOTZ', estimate_percent => 100)

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*)
  2  from blarzenflotz;

  COUNT(*)
----------
   2000000

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'BLARZENFLOTZ';

  NUM_ROWS
----------
   2000000

SQL>
SQL> delete from blarzenflotz;

2000000 rows deleted.

SQL>
SQL> select count(*)
  2  from blarzenflotz;

  COUNT(*)
----------
         0

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'BLARZENFLOTZ';

  NUM_ROWS
----------
   2000000

SQL>

Without gathering new statistics after such an operation (and that’s not likely to occur since statistics gathering is usually a scheduled task) you can see how deceiving they can be. There are no rows in the table, yet the statistics state otherwise. Hmmmm. A similar fate befalls you if someone else, in another session, is deleting rows from the table of interest and has yet to commit the deed. You’ll go happily along, ‘knowing’ there are two-million rows in the table, blissfully unaware that Morgenstern Slapdragon has just deleted every last one of them. Your ‘select count(*) from …’ query returns the expected 2000000 as a result, and old Morgenstern, running the same query, reports that nothing is left of that data. Sneak in a ‘commit’ and now both queries return what Morgenstern knew five minutes ago — the data in that table is gone.

Will querying DBA_EXTENTS tell you anything of worth with respect to the actual data population? I’m afraid not, as that view will only tell you that X extents have been allocated to the object/segment; there is no information in that view to report if those extents are actually populated.
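
A typical DBA_EXTENTS query (a sketch; the owner and segment name are illustrative) reports only the space allocated to the segment, whether or not a single row remains in it:

--
-- Extents allocated to the segment; this does not shrink when rows
-- are deleted
--
select count(*) extents, sum(bytes) allocated_bytes
from dba_extents
where owner = 'ORTOFON'
and segment_name = 'BLARZENFLOTZ';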

The high-water mark won’t tell you anything, either, as it simply marks the last block that contains, or ever contained, data. The high-water mark is not reset by deletes, only by a truncate, so deleting all of the data in a table won’t move the HWM, and using it to indicate data population is no more reliable than the statistics or the ‘select count(*) from …’ query in a multi-user environment. Even using ROWIDs is suspect, as those are retrieved from the UNDO blocks to return a consistent data set regardless of the state or number of uncommitted transactions against the table or tables queried.
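
You can watch this for yourself with the table from the statistics example: the BLOCKS statistic (which reflects blocks below the high-water mark) refuses to budge after the delete and only drops after a truncate. A sketch of the check, assuming fresh statistics are gathered each time:

--
-- BLOCKS reflects blocks below the high-water mark; the earlier delete
-- leaves it unchanged, only a truncate resets it
--
exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BLARZENFLOTZ', estimate_percent=>100)

select blocks from user_tables where table_name = 'BLARZENFLOTZ';

truncate table blarzenflotz;

exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BLARZENFLOTZ', estimate_percent=>100)

select blocks from user_tables where table_name = 'BLARZENFLOTZ';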

So how DO you know when a table is populated? You play your cards and you take your chances. Odds are the tools described here will give you a good idea of whether a table is populated or not. Just remember that, because of Oracle’s read consistency mechanism, those results can sometimes be wrong.

But the odds of the results being correct are far better than any you can get in Las Vegas.

October 1, 2008

I Need More Stuff

Filed under: General — dfitzjarrell @ 13:10

Careful coding is a must in application development but, occasionally, the best laid plans go astray. Enter the ORA-00947 error, caused by queries and insert statements having too few values. Let’s look at a few examples of how this can happen.

A simple query can become a nightmare if the subquery doesn’t return the proper number of values:

SQL> select e.empno, e.ename, d.loc, m.ename
  2  from emp e , emp m, dept d
  3  where m.empno = e.mgr
  4  and d.deptno = e.deptno
  5  and e.deptno = m.deptno
  6  and (d.deptno, d.dname) in (select deptno from emp)
  7  /
and (d.deptno, d.dname) in (select deptno from emp)
                            *
ERROR at line 6:
ORA-00947: not enough values 

SQL>

Yes, this was an obvious example, but it does illustrate the point. Insert statements are also not immune to returning such an error, through the values clause:

SQL> create table yingsplut(
  2        hargen number,
  3        neebo  number,
  4        snerm  varchar2(75)
  5  );

Table created.

SQL> 
SQL> insert into yingsplut
  2  values (17, 3)
  3  /
insert into yingsplut
            *
ERROR at line 1:
ORA-00947: not enough values 


SQL>

Or through a select statement:

SQL> insert into yingsplut
  2  select empno, sal from emp
  3  /
insert into yingsplut
            *
ERROR at line 1:
ORA-00947: not enough values 


SQL> 

Materialized view refreshes can also be affected if the materialized view was created with ‘select * from …’ and the source table definition has changed since the materialized view was created. The solution here is to recreate the materialized view using a query listing all of the desired columns explicitly. Yes, in some cases that won’t stop base table definition changes from ‘hosing’ your refresh, but simply adding columns to a base table definition won’t result in the refresh job generating an error.
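
In other words, instead of building the materialized view on ‘select * from …’, spell out the columns; a sketch (the object names and refresh options are illustrative):

--
-- List the columns explicitly rather than relying on select *
--
create materialized view emp_mv
refresh complete on demand
as select empno, ename, job, mgr, hiredate, sal, comm, deptno
from emp;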

Normally this error wouldn’t arise from such simple code; a table with a large number of columns, though, can easily produce insert statements that miss the mark by one or more values simply because the developer lost count of the default or NULL values:

SQL> create table valtest(
  2        val_key number,
  3        val1    varchar2(12),
  4        val2    varchar2(12),
  5        val3    varchar2(12),
  6        val4    varchar2(12),
  7        val5    varchar2(12),
  8        val6    varchar2(12),
  9        val7    varchar2(12),
 10        val8    varchar2(12),
 11        val9    varchar2(12),
 12        val10   varchar2(12),
 13        val11   varchar2(12),
 14        val12   varchar2(12),
 15        val13   varchar2(12),
 16        val14   varchar2(12),
 17        val15   varchar2(12),
 18        val16   varchar2(12),
 19        val17   varchar2(12),
 20        val18   varchar2(12),
 21        val19   varchar2(12),
 22        val20   varchar2(12),
 23        val21   varchar2(12),
 24        val22   varchar2(12),
 25        val23   varchar2(12),
 26        val24   varchar2(12),
 27        val25   varchar2(12),
 28        val26   varchar2(12),
 29        val27   varchar2(12),
 30        val28   varchar2(12),
 31        val29   varchar2(12),
 32        val30   varchar2(12),
 33        val31   varchar2(12),
 34        val32   varchar2(12),
 35        val33   varchar2(12),
 36        val34   varchar2(12),
 37        val35   varchar2(12),
 38        val36   varchar2(12),
 39        val37   varchar2(12)
 40  );

Table created.

SQL> 
SQL> insert into valtest
  2  values (1,
  3        'This',
  4        'is',
  5        'the',
  6        'first',
  7        'record',
  8        'inserted',
  9        'into',
 10        'my',
 11        'table',
 12        'and',
 13        'it''s',
 14        'a',
 15        'really',
 16        'long',
 17        'one',
 18        'because',
 19        'I',
 20        'wanted',
 21        'to',
 22        'be',
 23        'as',
 24        'difficult',
 25        'as',
 26        'possible',
 27        NULL,
 28        NULL,
 29        NULL,
 30        NULL,
 31        NULL);
insert into valtest
            *
ERROR at line 1:
ORA-00947: not enough values 


SQL> 
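
With a column list this long, a quick dictionary query (a sketch) beats counting commas when trying to reconcile the VALUES list with the table definition:

--
-- How many values does the INSERT actually need to supply?
--
select count(*) column_count
from user_tab_columns
where table_name = 'VALTEST';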

PL/SQL blocks are not immune to this, either, when using BULK COLLECT, if the collection type is defined with a column or two missing:

SQL> create table lotsa_data(
  2        data_id number,
  3        data_set number,
  4        data_val varchar2(40),
  5        proc_dt  date
  6  );

Table created.

SQL> 
SQL> create table less_data(
  2        data_id number,
  3        data_set number,
  4        data_val varchar2(40)
  5  );

Table created.

SQL> 
SQL> create table ref_data(
  2        data_id number,
  3        data_set number
  4  );

Table created.

SQL> 
SQL> begin
  2        for i in 1..10000 loop
  3         insert into lotsa_data
  4         values(i, mod(i, 17), 'Test data statement '||i, sysdate);
  5         insert into ref_data
  6         values(i, mod(i,17));
  7        end loop;
  8  
  9        commit;
 10  
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL> 
SQL> declare
  2        type ldat_tab_typ is table of less_data%rowtype index by binary_integer;
  3  
  4        d_tab ldat_tab_typ;
  5  begin
  6        update lotsa_data
  7        set data_id = data_id + 10
  8        where data_set = 16
  9        returning data_id, data_set, data_val, proc_dt bulk collect into d_tab;
 10  
 11        for i in 1..d_tab.count loop
 12         dbms_output.put_line(d_tab(i).data_id||'  '||d_tab(i).data_set||' '||d_tab(i).data_val);
 13        end loop;
 14  end;
 15  /
        returning data_id, data_set, data_val, proc_dt bulk collect into d_tab;
                                                                              *
ERROR at line 9:
ORA-06550: line 9, column 79: 
PL/SQL: ORA-00947: not enough values 
ORA-06550: line 6, column 9: 
PL/SQL: SQL Statement ignored 


SQL> 

[If you go the other way and have too many columns in your collection, an ORA-00913 appears:

SQL> declare
  2        type ldat_tab_typ is table of lotsamore_data%rowtype index by binary_integer;
  3  
  4        d_tab ldat_tab_typ;
  5  begin
  6        update lotsa_data
  7        set data_id = data_id + 10
  8        where data_set = 16
  9        returning data_id, data_set, data_val, proc_dt bulk collect into d_tab;
 10  
 11        for i in 1..d_tab.count loop
 12         dbms_output.put_line(d_tab(i).data_id||'  '||d_tab(i).data_set||' '||d_tab(i).data_val);
 13        end loop;
 14  end;
 15  /
        returning data_id, data_set, data_val, proc_dt bulk collect into d_tab;
                                                                              *
ERROR at line 9:
ORA-06550: line 9, column 79: 
PL/SQL: ORA-00913: too many values 
ORA-06550: line 6, column 9: 
PL/SQL: SQL Statement ignored 


SQL> 

The solution to both conditions is the same, so read on.]

Troubleshooting such code is a time-consuming process, as the offending VALUES clauses, SELECT queries or collection variables need to be carefully examined and the argument count verified against the number of columns in the target table. Let’s take the very first example and fix it:

SQL> create table ref_dept
  2  as select deptno, dname, loc
  3  from dept
  4  where deptno in (10, 30, 70);

Table created.

SQL> 
SQL> select e.empno, e.ename, d.loc, m.ename
  2  from emp e , emp m, dept d
  3  where m.empno = e.mgr
  4  and d.deptno = e.deptno
  5  and e.deptno = m.deptno
  6  and (d.deptno, d.dname) in (select deptno, dname from ref_dept)
  7  /

     EMPNO ENAME      LOC           ENAME     
---------- ---------- ------------- ----------
      7934 MILLER     NEW YORK      CLARK     
      7782 CLARK      NEW YORK      KING      
      7499 ALLEN      CHICAGO       BLAKE     
      7654 MARTIN     CHICAGO       BLAKE     
      7900 JAMES      CHICAGO       BLAKE     
      7844 TURNER     CHICAGO       BLAKE     
      7521 WARD       CHICAGO       BLAKE    

7 rows selected.

SQL> 

Notice we created a new reference table and populated it with the desired values. The reference table was then used in the subquery to return the proper number of values, along with the correct department numbers and names, producing the expected result set. Sharp eyes will note that we didn’t really need the reference table, as we could have queried the DEPT table again with a qualifying WHERE clause:

SQL> select e.empno, e.ename, d.loc, m.ename
  2  from emp e , emp m, dept d
  3  where m.empno = e.mgr
  4  and d.deptno = e.deptno
  5  and e.deptno = m.deptno
  6  and (d.deptno, d.dname) in (select deptno, dname from dept where deptno in (10,30,70))
  7  /

     EMPNO ENAME      LOC           ENAME     
---------- ---------- ------------- ----------
      7934 MILLER     NEW YORK      CLARK 
      7782 CLARK      NEW YORK      KING  
      7499 ALLEN      CHICAGO       BLAKE 
      7654 MARTIN     CHICAGO       BLAKE
      7900 JAMES      CHICAGO       BLAKE 
      7844 TURNER     CHICAGO       BLAKE 
      7521 WARD       CHICAGO       BLAKE 

7 rows selected.

SQL> 

Correcting the INSERT statement takes a bit more time, since the column count and value count must match. But time and patience are rewarded:

SQL> insert into valtest
  2  values (1,
  3        'This',
  4        'is',
  5        'the',
  6        'first',
  7        'record',
  8        'inserted',
  9        'into',
 10        'my',
 11        'table',
 12        'and',
 13        'it''s',
 14        'a',
 15        'really',
 16        'long',
 17        'one',
 18        'because',
 19        'I',
 20        'wanted',
 21        'to',
 22        'be',
 23        'as',
 24        'difficult',
 25        'as',
 26        'possible',
 27        'since',
 28        'my',
 29        'goldfish',
 30        'can''t',
 31        'dance',
 32        'after',
 33        'thirteen',
 34        'cups',
 35        'of',
 36        'grape',
 37        'Kool-Aid',
 38        'from',
 39        'Duluth');

1 row created.

SQL> 
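
An alternative worth noting (a sketch, not part of the original fix): name the target columns in the INSERT so that any columns you leave out simply default to NULL, making a miscount far less likely:

--
-- Omitted columns default to NULL when the column list is explicit
--
insert into valtest (val_key, val1, val2, val3)
values (2, 'Far', 'fewer', 'values');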

Fixing the last example takes nothing more than changing the PL/SQL table to contain all of the referenced columns:

SQL> declare
  2        type ldat_tab_typ is table of lotsa_data%rowtype index by binary_integer;
  3
  4        d_tab ldat_tab_typ;
  5  begin
  6        update lotsa_data
  7        set data_id = data_id + 10
  8        where data_set = 16
  9        returning data_id, data_set, data_val, proc_dt bulk collect into d_tab;
 10
 11        for i in 1..d_tab.count loop
 12         dbms_output.put_line(d_tab(i).data_id||'  '||d_tab(i).data_set||' '||d_tab(i).data_val);
 13        end loop;
 14  end;
 15  /
349  16  Test data statement 339
366  16  Test data statement 356
383  16  Test data statement 373
400  16  Test data statement 390
[... much more output here ...]
9393  16  Test data statement 9383
9410  16  Test data statement 9400
9427  16  Test data statement 9417
9444  16  Test data statement 9434
9461  16  Test data statement 9451
9478  16  Test data statement 9468
9495  16  Test data statement 9485
9512  16  Test data statement 9502
9529  16  Test data statement 9519

PL/SQL procedure successfully completed.

SQL>

There is nothing more satisfying than the joy of a job well done, and nothing more frustrating than an error like ORA-00947 that somehow eludes correction. Sometimes it’s best to let another pair of eyes peruse the code; a fresh viewpoint can reveal mistakes the original programmer has overlooked and go a long way toward fixing problematic code. I’m not afraid to ask for help. You shouldn’t be, either.

Now, let’s code!
