Oracle Tips and Tricks — David Fitzjarrell

March 31, 2017

You’re A Natural

Filed under: General — dfitzjarrell @ 08:44

"'Why is it,' he said quietly, 'that quite often even the things which are correct just don't seem to be right?'"
-- Norton Juster, The Phantom Tollbooth

A YouTube video currently being promoted in the database community covers joins, including the varieties available in most database engines (the engine used in the video is Oracle). A very good discussion ensues covering inner, left outer, right outer, full outer and cross joins. Notably absent (and, conceivably, of limited use) is the natural join, used when the join columns share the same name (and, hopefully, the same definition). Let’s look at the natural join and what it can, and cannot, do.
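
For reference, the join varieties mentioned above can be sketched against two hypothetical tables, t1 and t2, each with an id column (the table and column names here are mine, used only for illustration):

```sql
select * from t1 inner join t2 on t1.id = t2.id;        -- rows matching in both tables
select * from t1 left outer join t2 on t1.id = t2.id;   -- all t1 rows, matched t2 rows or NULLs
select * from t1 right outer join t2 on t1.id = t2.id;  -- all t2 rows, matched t1 rows or NULLs
select * from t1 full outer join t2 on t1.id = t2.id;   -- all rows from both, NULLs where unmatched
select * from t1 cross join t2;                         -- Cartesian product, no join condition
select * from t1 natural join t2;                       -- implicit join on every same-named column
```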

The following example sets up the conditions for a successful natural join: two tables with a common column that will facilitate the use of the natural join. Notice that in a natural join the join columns cannot be qualified with a table name or alias; the natural join returns the selected columns from all tables in the join, which can make a select list a bit confusing to write. We begin with a simple ‘select *’ query:


SQL> create table yooper(
  2  snorm   number,
  3  wabba   varchar2(20),
  4  oplunt  date);

Table created.

SQL> 
SQL> create table amplo(
  2  snorm   number,
  3  fleezor date,
  4  smang   varchar2(20),
  5  imjyt   varchar2(17));

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into yooper
  4  		     values(i, 'Quarzenfleep '||i, sysdate +i);
  5  		     insert into amplo
  6  		     values(i, sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7  	     end loop;
  8  
  9  	     commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select *
  2  from yooper natural join amplo;

     SNORM WABBA                OPLUNT    FLEEZOR   SMANG                IMJYT
---------- -------------------- --------- --------- -------------------- -----------------
         1 Quarzenfleep 1       31-MAR-17 29-MAR-17 Erblo1               Zaxegoomp1
         2 Quarzenfleep 2       01-APR-17 28-MAR-17 Erblo2               Zaxegoomp2
         3 Quarzenfleep 3       02-APR-17 27-MAR-17 Erblo3               Zaxegoomp3
         4 Quarzenfleep 4       03-APR-17 26-MAR-17 Erblo4               Zaxegoomp4
         5 Quarzenfleep 5       04-APR-17 25-MAR-17 Erblo5               Zaxegoomp5
         6 Quarzenfleep 6       05-APR-17 24-MAR-17 Erblo6               Zaxegoomp6
         7 Quarzenfleep 7       06-APR-17 23-MAR-17 Erblo7               Zaxegoomp7
         8 Quarzenfleep 8       07-APR-17 22-MAR-17 Erblo8               Zaxegoomp8
         9 Quarzenfleep 9       08-APR-17 21-MAR-17 Erblo9               Zaxegoomp9
        10 Quarzenfleep 10      09-APR-17 20-MAR-17 Erblo10              Zaxegoomp10

10 rows selected.

SQL> 

In this case the common column not only has the same name but also the same data type. When processing such queries Oracle returns the join column data from the first table listed in the join. Since the column has the same name and definition in both tables here it really doesn’t matter which table the join column data comes from, but in the next example there will be a difference. Let’s drop the existing tables and recreate them, this time defining the common column as NUMBER in one table and VARCHAR2 in the other. The data in both tables will be the same (although the numbers will be stored as characters in the VARCHAR2 column). Since there are no alpha characters in the VARCHAR2 column the implicit TO_NUMBER() conversion succeeds; the difference in the output is that the SNORM column (the commonly named join column) now displays the character data, not the numeric, and, as mentioned previously, which is displayed depends on the table order in the join:


SQL> drop table yooper purge;

Table dropped.

SQL> drop table amplo purge;

Table dropped.

SQL> 
SQL> create table yooper(
  2  snorm   number,
  3  wabba   varchar2(20),
  4  oplunt  date);

Table created.

SQL> 
SQL> create table amplo(
  2  snorm   varchar2(10),
  3  fleezor date,
  4  smang   varchar2(20),
  5  imjyt   varchar2(17));

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into yooper
  4  		     values(i, 'Quarzenfleep '||i, sysdate +i);
  5  		     insert into amplo
  6  		     values(i, sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7  	     end loop;
  8  
  9  	     commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select *
  2  from yooper natural join amplo;

SNORM      WABBA                OPLUNT    FLEEZOR   SMANG                IMJYT
---------- -------------------- --------- --------- -------------------- -----------------
1          Quarzenfleep 1       31-MAR-17 29-MAR-17 Erblo1               Zaxegoomp1
2          Quarzenfleep 2       01-APR-17 28-MAR-17 Erblo2               Zaxegoomp2
3          Quarzenfleep 3       02-APR-17 27-MAR-17 Erblo3               Zaxegoomp3
4          Quarzenfleep 4       03-APR-17 26-MAR-17 Erblo4               Zaxegoomp4
5          Quarzenfleep 5       04-APR-17 25-MAR-17 Erblo5               Zaxegoomp5
6          Quarzenfleep 6       05-APR-17 24-MAR-17 Erblo6               Zaxegoomp6
7          Quarzenfleep 7       06-APR-17 23-MAR-17 Erblo7               Zaxegoomp7
8          Quarzenfleep 8       07-APR-17 22-MAR-17 Erblo8               Zaxegoomp8
9          Quarzenfleep 9       08-APR-17 21-MAR-17 Erblo9               Zaxegoomp9
10         Quarzenfleep 10      09-APR-17 20-MAR-17 Erblo10              Zaxegoomp10

10 rows selected.

SQL> 

Notice when the tables are reversed the SNORM data is again numeric:


SQL> select *
  2  from amplo natural join yooper;

     SNORM FLEEZOR   SMANG                IMJYT             WABBA                OPLUNT
---------- --------- -------------------- ----------------- -------------------- ---------
         1 29-MAR-17 Erblo1               Zaxegoomp1        Quarzenfleep 1       31-MAR-17
         2 28-MAR-17 Erblo2               Zaxegoomp2        Quarzenfleep 2       01-APR-17
         3 27-MAR-17 Erblo3               Zaxegoomp3        Quarzenfleep 3       02-APR-17
         4 26-MAR-17 Erblo4               Zaxegoomp4        Quarzenfleep 4       03-APR-17
         5 25-MAR-17 Erblo5               Zaxegoomp5        Quarzenfleep 5       04-APR-17
         6 24-MAR-17 Erblo6               Zaxegoomp6        Quarzenfleep 6       05-APR-17
         7 23-MAR-17 Erblo7               Zaxegoomp7        Quarzenfleep 7       06-APR-17
         8 22-MAR-17 Erblo8               Zaxegoomp8        Quarzenfleep 8       07-APR-17
         9 21-MAR-17 Erblo9               Zaxegoomp9        Quarzenfleep 9       08-APR-17
        10 20-MAR-17 Erblo10              Zaxegoomp10       Quarzenfleep 10      09-APR-17

10 rows selected.

SQL>

Adjusting the select list to name specific columns produces a smaller data set; notice that no table aliases or prefixes are used, which can make it difficult to keep track of which columns come from which table:


SQL> select smang, snorm, fleezor
  2  from yooper natural join amplo;

SMANG                SNORM      FLEEZOR
-------------------- ---------- ---------
Erblo1               1          29-MAR-17
Erblo2               2          28-MAR-17
Erblo3               3          27-MAR-17
Erblo4               4          26-MAR-17
Erblo5               5          25-MAR-17
Erblo6               6          24-MAR-17
Erblo7               7          23-MAR-17
Erblo8               8          22-MAR-17
Erblo9               9          21-MAR-17
Erblo10              10         20-MAR-17

10 rows selected.

SQL>
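
For comparison, the natural join above behaves like an explicit join with a USING clause (a sketch of my own, not from the transcripts above); note that USING, like the natural join, forbids qualifying the join column with a table name or alias:

```sql
select smang, snorm, fleezor
from yooper join amplo using (snorm);

-- Qualifying the USING column, e.g. yooper.snorm, raises
-- ORA-25154: column part of USING clause cannot have qualifier
```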

The natural join only requires the column names to match; the definitions can differ, and as long as the data can be implicitly converted so a comparison can be made the query succeeds. Now let’s change the picture a bit more and store character strings in one table while the other retains numeric data:


SQL> drop table yooper purge;

Table dropped.

SQL> drop table amplo purge;

Table dropped.

SQL> 
SQL> create table yooper(
  2  snorm   number,
  3  wabba   varchar2(20),
  4  oplunt  date);

Table created.

SQL> 
SQL> create table amplo(
  2  snorm   varchar2(10),
  3  fleezor date,
  4  smang   varchar2(20),
  5  imjyt   varchar2(17));

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into yooper
  4  		     values(i, 'Quarzenfleep '||i, sysdate +i);
  5  		     insert into amplo
  6  		     values(i||'Bubba', sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7  	     end loop;
  8  
  9  	     commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select *
  2  from yooper natural join amplo;
select *
*
ERROR at line 1:
ORA-01722: invalid number


SQL> 
SQL> select *
  2  from amplo natural join yooper;
select *
*
ERROR at line 1:
ORA-01722: invalid number


SQL> 
SQL> drop table yooper purge;

Table dropped.

SQL> drop table amplo purge;

Table dropped.

SQL> 

Now the natural join fails to return data since the implicit TO_NUMBER() conversion throws ORA-01722; in this example it doesn’t matter which table is listed first in the join, as the conversion is always from the character string to a number.
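
One way around the failure (a sketch of my own, not part of the original example) is an explicit join that controls the direction of the conversion, turning the number into a string rather than relying on the implicit TO_NUMBER():

```sql
-- Convert the NUMBER side to a string so no TO_NUMBER() is attempted;
-- with the 'Bubba' suffixes in AMPLO.SNORM no rows will match,
-- but no ORA-01722 is raised.
select y.snorm, y.wabba, a.smang
from yooper y join amplo a on to_char(y.snorm) = a.snorm;
```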

The natural join will use all commonly named columns in the join condition, so let’s add another matching column to this example and see what happens:


SQL> create table yooper(
  2  snorm      number,
  3  fleezor date,
  4  wabba      varchar2(20),
  5  oplunt     date);

Table created.

SQL>
SQL> create table amplo(
  2  snorm      number(10),
  3  fleezor date,
  4  smang      varchar2(20),
  5  imjyt      varchar2(17));

Table created.

SQL>
SQL> begin
  2          for i in 1..10 loop
  3                  insert into yooper
  4                  values(i, sysdate-i, 'Quarzenfleep '||i, sysdate +i);
  5                  insert into amplo
  6                  values(i, sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7          end loop;
  8
  9          commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from yooper natural join amplo;

     SNORM FLEEZOR   WABBA                OPLUNT    SMANG                IMJYT
---------- --------- -------------------- --------- -------------------- -----------------
         1 30-MAR-17 Quarzenfleep 1       01-APR-17 Erblo1               Zaxegoomp1
         2 29-MAR-17 Quarzenfleep 2       02-APR-17 Erblo2               Zaxegoomp2
         3 28-MAR-17 Quarzenfleep 3       03-APR-17 Erblo3               Zaxegoomp3
         4 27-MAR-17 Quarzenfleep 4       04-APR-17 Erblo4               Zaxegoomp4
         5 26-MAR-17 Quarzenfleep 5       05-APR-17 Erblo5               Zaxegoomp5
         6 25-MAR-17 Quarzenfleep 6       06-APR-17 Erblo6               Zaxegoomp6
         7 24-MAR-17 Quarzenfleep 7       07-APR-17 Erblo7               Zaxegoomp7
         8 23-MAR-17 Quarzenfleep 8       08-APR-17 Erblo8               Zaxegoomp8
         9 22-MAR-17 Quarzenfleep 9       09-APR-17 Erblo9               Zaxegoomp9
        10 21-MAR-17 Quarzenfleep 10      10-APR-17 Erblo10              Zaxegoomp10

10 rows selected.

SQL>
SQL> select *
  2  from amplo natural join yooper;

     SNORM FLEEZOR   SMANG                IMJYT             WABBA                OPLUNT
---------- --------- -------------------- ----------------- -------------------- ---------
         1 30-MAR-17 Erblo1               Zaxegoomp1        Quarzenfleep 1       01-APR-17
         2 29-MAR-17 Erblo2               Zaxegoomp2        Quarzenfleep 2       02-APR-17
         3 28-MAR-17 Erblo3               Zaxegoomp3        Quarzenfleep 3       03-APR-17
         4 27-MAR-17 Erblo4               Zaxegoomp4        Quarzenfleep 4       04-APR-17
         5 26-MAR-17 Erblo5               Zaxegoomp5        Quarzenfleep 5       05-APR-17
         6 25-MAR-17 Erblo6               Zaxegoomp6        Quarzenfleep 6       06-APR-17
         7 24-MAR-17 Erblo7               Zaxegoomp7        Quarzenfleep 7       07-APR-17
         8 23-MAR-17 Erblo8               Zaxegoomp8        Quarzenfleep 8       08-APR-17
         9 22-MAR-17 Erblo9               Zaxegoomp9        Quarzenfleep 9       09-APR-17
        10 21-MAR-17 Erblo10              Zaxegoomp10       Quarzenfleep 10      10-APR-17

10 rows selected.

SQL>
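
Behind the scenes the natural join above is equivalent to an explicit inner join on both commonly named columns; a sketch:

```sql
select y.snorm, y.fleezor, y.wabba, y.oplunt, a.smang, a.imjyt
from yooper y
join amplo a on a.snorm = y.snorm
            and a.fleezor = y.fleezor;
```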

As with all inner joins only the matching data is returned, so if YOOPER is reloaded with only the even-numbered records only those rows appear in the output of the join:


SQL> truncate table yooper;

Table truncated.

SQL>
SQL> begin
  2          for i in 1..10 loop
  3                  if mod(i,2) = 0 then
  4                          insert into yooper
  5                          values(i, sysdate-i, 'Quarzenfleep '||i, sysdate +i);
  6                  end if;
  7          end loop;
  8
  9          commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from yooper natural join amplo;

     SNORM FLEEZOR   WABBA                OPLUNT    SMANG                IMJYT
---------- --------- -------------------- --------- -------------------- -----------------
         2 29-MAR-17 Quarzenfleep 2       02-APR-17 Erblo2               Zaxegoomp2
         4 27-MAR-17 Quarzenfleep 4       04-APR-17 Erblo4               Zaxegoomp4
         6 25-MAR-17 Quarzenfleep 6       06-APR-17 Erblo6               Zaxegoomp6
         8 23-MAR-17 Quarzenfleep 8       08-APR-17 Erblo8               Zaxegoomp8
        10 21-MAR-17 Quarzenfleep 10      10-APR-17 Erblo10              Zaxegoomp10

SQL>
SQL> select *
  2  from amplo natural join yooper;

     SNORM FLEEZOR   SMANG                IMJYT             WABBA                OPLUNT
---------- --------- -------------------- ----------------- -------------------- ---------
         2 29-MAR-17 Erblo2               Zaxegoomp2        Quarzenfleep 2       02-APR-17
         4 27-MAR-17 Erblo4               Zaxegoomp4        Quarzenfleep 4       04-APR-17
         6 25-MAR-17 Erblo6               Zaxegoomp6        Quarzenfleep 6       06-APR-17
         8 23-MAR-17 Erblo8               Zaxegoomp8        Quarzenfleep 8       08-APR-17
        10 21-MAR-17 Erblo10              Zaxegoomp10       Quarzenfleep 10      10-APR-17

SQL>

All data in the common columns must match to return data; if the ids and dates in YOOPER don’t match up with the ids and dates in AMPLO no data is returned:


SQL> truncate table yooper;

Table truncated.

SQL>
SQL> begin
  2          for i in 1..10 loop
  3                  if mod(i,2) = 0 then
  4                          insert into yooper
  5                          values(i-1, sysdate-i, 'Quarzenfleep '||i, sysdate +i);
  6                  end if;
  7          end loop;
  8
  9          commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from yooper natural join amplo;

no rows selected

SQL>
SQL> select *
  2  from amplo natural join yooper;

no rows selected

SQL>

There are matching id values, and there are matching date values, between the tables, but the combination of id and date produces no matching records. With a traditional inner join data could be returned by joining on either the id values or the date values alone (although how good those results would be is questionable, given the matching key structure).
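
To illustrate, a traditional inner join on just the id column would still return rows from this last data set (a sketch; whether the pairing is meaningful is another matter):

```sql
-- Joining only on SNORM ignores the date mismatch, so the odd-numbered
-- ids loaded into YOOPER pair with the same ids in AMPLO even though
-- the natural join returned nothing.
select y.snorm, y.fleezor yooper_date, a.fleezor amplo_date
from yooper y join amplo a on a.snorm = y.snorm;
```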

A natural join isn’t a commonly used join type, mainly because joined tables are not likely to contain join columns with the same name (the EMP and DEPT demonstration tables supplied with Oracle, which share the DEPTNO column, are a good candidate set for a natural join). When such a condition exists a natural join is an option, but testing is necessary to ensure that the results returned are both desirable and usable.

Just because it’s ‘correct’ doesn’t make it ‘right’. Right?


March 28, 2017

Finding Your Way

Filed under: General — dfitzjarrell @ 08:00

"Whether or not you find your own way, you're bound to find some way. If you happen to find my way, please return it,
as it was lost years ago. I imagine by now it's quite rusty."
-- Norton Juster, The Phantom Tollbooth

Oracle has provided access to its wait interface for several releases and with each new release it expands the range of wait information available, so much so that it’s hard to not find something to examine. Disk reads, logical reads, sort activity, table scans all vie for the attention of the DBA. Of course examination leads to investigation which leads, inevitably, to tuning, even when there is nothing to tune. Such constant twiddling and tweaking is known as Compulsive Tuning Disorder, or CTD. Unfortunately the more ways Oracle provides to interrogate the wait interface the more the DBA can fall victim to CTD. To help reduce the urge to tune a few questions need to be asked regarding the so-called ‘problem area’. Let’s dig in and ask those questions.

First, and foremost, is the following question:

“What problem are you trying to solve?”

If you can’t answer that question then there really isn’t a reason to tune anything; you’ll never know when you’re done and the task will go on and on and on … ad infinitum, ad nauseam, with no progress to report and no end in sight, another DBA sucked into the rabbit hole of CTD. One thing will lead to another and another and another as you find more areas to ‘tune’ based on the wait interface data and blog posts and articles clearly telling you something needs to be fixed. In most cases nothing could be further from the truth.

Next come misleading or misunderstood numbers, mainly in reference to data reads and writes. I’ve seen some DBAs try to tune the database to reduce logical reads — it’s usually newer DBAs who see the large values for logical reads and conclude there is a problem. The issue isn’t the available data, it’s isolating one small aspect of the entire performance picture and tuning that to the exclusion of everything else. Large volumes of logical reads aren’t necessarily a problem unless the buffer cache is being reloaded over short periods of time, which would be accompanied by large volumes of physical reads. In cases such as this the physical reads would be the telling factor and those MAY be cause for concern. It depends upon the system: an OLTP system grinding through tables to get a single row is most likely a problem, whereas a data warehouse churning through that same volume of data would be normal.

Taking the logical reads in isolation from the physical reads can cloud the water and obscure the real issue: a buffer cache that may be too small for the workload. A configuration that once was more than sufficient can, over time, become a performance bottleneck as more and more users work in the database. A database is a changing entity and it needs to be tended, like a garden, if it’s going to grow.
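
To put numbers to that, logical and physical read activity can be compared from v$sysstat; a minimal sketch using standard statistic names:

```sql
-- A 'physical reads' figure that is large relative to
-- 'session logical reads' can indicate the buffer cache
-- is being reloaded too often for the workload.
select name, value
from v$sysstat
where name in ('session logical reads', 'physical reads');
```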

The DBA needs to listen to the users since they will be the first to complain when something isn’t right and needs attention. Performance is time and, for business, time is money; when tasks take longer and longer to complete, less work gets done. The DBA shouldn’t need to hunt for things to do; continually tuning to get that last microsecond of performance is really wasted effort — if no one but the DBA is going to notice the ‘improvement’ it’s not worth pursuing.

Not all tuning is bad or wasted effort but the DBA needs to have a clear goal in mind and a path to follow that addresses issues and brings them to some sort of resolution, even if it’s only a temporary fix until a permanent solution can be implemented. It does no good to constantly pick apart the database to find problems to solve, especially when the users aren’t complaining.

When something is wrong the DBA will hear about it; that’s the time to step into action and start problem solving. The DBA doesn’t need to go looking for problems, they’ll show up all by themselves. And if he or she isn’t constantly twiddling with this or tweaking that the real issues can be dealt with when they happen. Then the users will stop complaining and peace and joy will reign supreme. Okay, so peace and joy won’t necessarily cover the land but the users will stop complaining, at least for a while, and there will be benefit seen from the effort expended.

CTD is thankless, relentless and never-ending, so don’t get caught up in wanting to fix everything; it can’t be done and some things are, most likely, not worth the effort given the small return that investment will generate. It’s not enough to know when to stop; the DBA also needs to know when NOT to start. If there is no clear destination to the journey it’s best not to begin. There is plenty to do without making work out of nothing.

Go your own way, just don’t get lost.

March 13, 2017

It’s Private

Filed under: General — dfitzjarrell @ 10:35

“The only thing you can do easily is be wrong, and that's hardly worth the effort.” 
― Norton Juster, The Phantom Tollbooth

Oracle provides two parameters that affect the PGA that look very similar but operate very differently. One of these parameters is the well-known pga_max_size and the other is a hidden parameter, _pga_max_size. Let’s look at both and see how one can be very effective while the other can create problems with respect to PGA memory management.

DBAs know pga_max_size from extensive documentation from Oracle Corporation and from numerous Oracle professionals writing blog posts about it. It’s a common parameter to set to restrict the overall size of the PGA in releases 11.2 and later. It’s available if Automatic Memory Management (AMM) is not in use; databases running on Linux and using hugepages fall into this group since AMM and hugepages are not a supported combination. Hugepages are available for IPC (Inter-Process Communication) shared memory; this is the ‘standard’ shared memory model (starting with UNIX System V) allowing multiple processes to access the same shared memory segment. There is also another form of shared memory segment, the memory-mapped file, and currently such segments are not supported by hugepages. Oracle, on Linux, gives you a choice of using hugepages or memory-mapped files, and you implement that choice by electing to use (or not use) AMM. Using Automatic Shared Memory Management (ASMM) instead allows the DBA to set parameters such as sga_target, sga_max_size, pga_aggregate_target and pga_max_size and retain some control over how those memory areas are sized.

Using pga_max_size is a simple task:


SQL> alter system set pga_max_size=2G;

System altered.

SQL>

Now Oracle will do its best to limit the overall PGA size to the requested value, but remember this is a targeted maximum, not an absolute limit. It is more restrictive than pga_aggregate_target, meaning it’s less likely to be exceeded.

On to its sister parameter, _pga_max_size. This parameter regulates the size of the PGA memory allocated to a single process. Oracle sets this using calculations based on pga_aggregate_target and pga_max_size and, since it is an ‘undocumented’ parameter, it should NOT be changed at the whim of the DBA. Setting this to any value prevents Oracle from setting it based on its standard calculations and can seriously impact database performance and memory usage. If, for example, the DBA does this:


SQL> alter system set "_pga_max_size"=2G;

System altered.

SQL>

Oracle is now capable of allocating up to 2 GB of PGA to each and every process started after that change has taken place. On an exceptionally active and busy system, with parallel processing enabled, each process can have up to 2 GB of RAM in its PGA. Since many systems still don’t have terabytes of RAM installed, such allocations can bring the database, and the server, to a grinding halt, throwing ORA-04030 errors in the process. This, of course, is not what the DBA intended, but it is what the DBA enabled by altering the _pga_max_size parameter. Unfortunately this parameter (_pga_max_size) is still being written about in blogs that provide unvalidated ‘information’ to the Oracle community.
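
The value Oracle has computed for _pga_max_size can be inspected (as SYS) with the commonly used x$ksppi/x$ksppcv query; this is a sketch of that well-known technique, not part of the original discussion:

```sql
-- Must be run as SYS; hidden parameters are not exposed
-- through v$parameter by default.
select i.ksppinm parameter, v.ksppstvl value, v.ksppstdf is_default
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and i.ksppinm = '_pga_max_size';
```

Seeing the computed default is as far as it should go; as noted above, setting the parameter yourself overrides Oracle’s calculations.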

Knowledge is power; unfortunately unverified ‘information’ is taken as knowledge by those who don’t apply critical thinking to what they read (the common misconception that ‘if it’s on the Internet it MUST be true’ doesn’t help). I know of DBAs who set _pga_max_size to match the pga_max_size parameter and found, to their dismay, that their actions seriously impacted production systems in a negative way. Sometimes in the database world prolific authors are taken as experts and their words treated as gospel. Unfortunately prolific doesn’t necessarily mean reliable.

It’s always best to test what others tell you before assuming the advice given to you is right.

March 1, 2017

Return To Sender

Filed under: General — dfitzjarrell @ 16:06

"The most important reason for going from one place to another is to see what's in between."
-- Norton Juster, The Phantom Tollbooth

Recently in an Oracle forum a question resurfaced regarding enabling row movement for tables. The posted question, from five years ago, asked if row movement was safe and if there could be any ‘undesired impact on application data’. The answer to the first part of that question is ‘yes’ (it’s safe because Oracle, except under extreme conditions, won’t lose your data) and the answer to the second part is also ‘yes’. That may be confusing so let’s look at what the end result could be.

Data rows are uniquely identified by a construct known far and wide as the ROWID. ROWIDs contain a wealth of information as to the location of a given row; the file number, the block number and row number are all encoded in this curious value. Updates can change pretty much everything in a row except the ROWID and primary key values (and, yes, there’s a way to change PK values but it involves deleting and inserting the row — Oracle does this when, for some bizarre reason known only to the user making the change, a PK value is updated). The ONLY way to change a ROWID value is to physically move the row, which is what enabling row movement will allow. This is undesirable for the reasons listed below:


	* Applications coded to store ROWID values can fail as the data that was once in Location A is now in Location B.
	* Indexes will become invalid or unusable, requiring that they be rebuilt.

Storing ROWID values in application tables isn’t the wisest of choices a developer can make. Exporting from the source and importing into a new destination will automatically render those stored ROWID values useless. Cloning the database via RMAN will do the same thing, since ROWID values are unique only within the database where they are generated; they do not transport across servers or platforms. Consider two imaginary countries, Blaggoflerp and Snormaflop. Each is unique in geography, so that locations found in Blaggoflerp are not found in Snormaflop, with the reverse also being true. If the traveler has but one map, of Blaggoflerp, and tries to use it to navigate Snormaflop, our traveler will become hopelessly lost and confused. Enable row movement on a table where indexes are present, or where an application stores ROWIDs for easy data access, or both, and Oracle starts singing that old Elvis Presley hit, written by Winfield Scott, “Return To Sender”:


Return to sender, address unknown.
No such person, no such zone.

Don’t misunderstand, the data is STILL in the table, it’s just moved from its original location and left no forwarding address. It’s possible that new data now occupies the space where that trusty old row used to live, so the application doesn’t break but it does return unexpected results because the values that were once at that location are no longer there. And any indexes that referenced that row’s original ROWID are now invalidated, making them useless until manual intervention is employed to rebuild them.
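
A small sketch (the table and column names are mine, not from the original discussion) shows the ROWID changing when row movement lets an update relocate a row across partitions:

```sql
create table rowmove_demo (id number, val varchar2(10))
partition by range (id)
(partition p_low  values less than (5),
 partition p_high values less than (100));

alter table rowmove_demo enable row movement;

insert into rowmove_demo values (1, 'here');
commit;

select rowid from rowmove_demo where val = 'here';  -- note the ROWID

-- Updating the partition key moves the row to P_HIGH; without
-- ENABLE ROW MOVEMENT this update would raise ORA-14402.
update rowmove_demo set id = 50 where val = 'here';
commit;

select rowid from rowmove_demo where val = 'here';  -- a different ROWID
```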


"Since you got here by not thinking, it seems reasonable to expect that, in order to get out, you must start thinking."
-- Norton Juster, The Phantom Tollbooth

Maybe it’s not that the DBA didn’t think about the act before he or she did it, it might simply be that he or she didn’t think far enough ahead about the results of such an act to make a reasonable choice. Changes to a database can affect downstream processing; failing to consider the ripple effect of such changes can be disastrous, indeed. It isn’t enough in this day and age to consider the database as a ‘lone ranger’; many systems can depend on a single database and essentially haphazard changes can stop them in their tracks.

There may be times when enabling row movement is necessary; changing a partitioning key is one of them. Granted, making such changes on a partitioned table will be part of a larger outage window where the local and global indexes can be maintained, so the impact will be positive, not negative. Absent such tasks (ones where row movement would be necessary) it’s not recommended to enable row movement, as it will certainly break things, especially things no one was expecting because of a lack of knowledge of the affected systems.

It’s not always good to go travelling.
