Some of the strangest ideas (to me, anyway) can be found in the depths of PL/SQL code, be it a package, a procedure, a function or an anonymous block. Granted, such decisions may be the result of budgetary considerations that prevented a more robust and reliable application design. I was looking at some code just recently that made me scratch my head in wonder and disbelief. Let's look at a similar piece of PL/SQL that does basically the same thing and see why it's inefficient and can cause performance problems.
Our example begins with a seemingly innocuous piece of code:
...
for u_cur in (select rowid rid, user_id
              from master_user_list
              where usr_id_status = 'Available') loop
  --
  -- Check if someone has this record locked
  --
  -- This can fail to identify 'locked' records since it relies
  -- on successful completion of a transaction
  --
  -- And some other session could be actively updating this record as
  -- your session is trying to use it
  --
  begin
    select rowid, user_id
    into v_row_id, p_user.user_id
    from master_user_list
    where usr_id_status = 'Available'
    and rowid = u_cur.rid
    for update of usr_id_status, usr_id_status_cd skip locked;
  exception
    --
    -- This should throw an error immediately, rather than
    -- waiting for more procedural/transactional code
    -- to execute
    --
    when no_data_found then null;
  end;
  -- If not then exit the loop
  exit when p_user.user_id is not null;
end loop;
--
-- Check to verify a user_id was found
--
-- Interesting, since we just found an available
-- user_id in the previous block of code ...
--
if p_user.user_id is not null then
  --
  -- Update the master table to show this user_id is now taken
  --
  -- Since two sessions can find the same 'available' user_id and use it
  -- this isn't efficient nor is it reliable
  --
  begin
    update master_user_list
    set usr_id_status = 'Used',
        usr_id_status_cd = 'U',
        dt_id_iss = v_date
    where rowid = v_row_id;
  exception
    when others then
      rollback;
      p_mess_txt := sqlerrm;
      return(-93);
  end;
else
  rollback;
  p_mess_txt := 'No available user_id.';
  return(-91);
end if;
else
  -- The user_id was specified, make sure it is reserved and available. Return error if it isn't.
  --
  -- Obviously the 'select ... for update of ... skip locked;' statement prevents two sessions from
  -- using the same user_id found in the first part of the code
  --
  begin
    select rowid
    into v_row_id
    from master_user_list
    where usr_id_status = 'Reserved'
    and user_id = p_user.user_id
    for update of usr_id_status, usr_id_status_cd skip locked;
  exception
    --
    -- And this code should throw an error immediately, rather than
    -- waiting until later in the code to do so
    --
    when no_data_found then null;
  end;
  --
  -- Check to verify a user_id was found
  --
  -- If the exception handler actually threw an error rather than obscuring it
  -- this next section of code probably wouldn't be needed as a separate block
  --
  -- It is surprising such code works since the v_got_one variable is declared
  -- yet never set before it's used to see if the user_id has been 'found'
  --
  if v_got_one is not null then
    -- Update the master table to show this user_id is now taken
    begin
      update master_user_list
      set usr_id_status = 'Used',
          usr_id_status_cd = 'U',
          dt_id_iss = v_date
      where rowid = v_row_id;
    exception
      when others then
        rollback;
        p_mess_txt := sqlerrm;
        return(-3);
    end;
  else
    rollback;
    p_mess_txt := 'The user id '||p_user.user_id||' is not available';
    return(-92);
  end if;
end if;
...
This seems, to me, to be a lot of work to find a usable USER_ID, and it doesn't even access the actual APPUSER table to check whether the user_id is in use. Rewriting the initial select statement that 'finds' available user_id values could help by eliminating the redundant steps taken to return such user_ids. The real issue is that, after all of that work to reveal the next "available" user_id, the insert into the APPUSER table can still fail:
-- Create new user record
begin
  insert into appuser(user_id,
                      alternate_id,
                      first_name,
                      last_name,
                      password_hash,
                      date_of_birth,
                      gender,
                      ethnicity,
                      created_by,
                      create_date,
                      upper_user_id)
  values (p_user.user_id,
          p_user.alternate_id,
          p_user.first_name,
          p_user.last_name,
          p_user.pwd,
          p_user.birthdate,
          p_user.gender,
          p_user.ethnicity,
          v_user,
          v_date,
          upper(p_user.user_id));
exception
  when dup_val_on_index then
    rollback;
    p_mess_txt := 'Duplicate value in the APPUSER table';
    return (-111);
  when others then
    rollback;
    p_mess_txt := sqlerrm;
    return (-112);
end;
This is a lot of code to execute just to end up with no new user created because the USER_ID it ‘found’ was also found and taken by another session while this session was working to take it. It truly is the ‘long way around’ to try to ensure data integrity; after all of that work Oracle can still prevent you from inserting a new user record because the primary key constraint simply won’t let you.
This illustrates the difficulty of writing transactional code that can scale and allow multiple concurrent sessions to perform work. The logic shows thought; since assigning a new user_id relies on a table containing 'acceptable' user_ids for the application, it's difficult to implement without going through all of the motions shown above. Yet, even after all of those checks, the process can still fail to assign a user_id because some other session got to it first.
Can this be improved? Possibly, but it still relies on the results of a select statement that can return values another session has already marked as used but not yet committed, even when the query references both the source and destination tables to find available values:
...
for u_cur in (select mul.rowid rid, mul.user_id
              from master_user_list mul
              left outer join appuser au
                on au.user_id = mul.user_id
              where usr_id_status = 'Available'
              and au.user_id is null) loop
...
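A further refinement would be to move the lock into the driving cursor itself, so that rows locked by other sessions are skipped as they are fetched rather than checked in a second query. This is only a sketch, reusing the table and column names from the code above and assuming v_row_id, p_user and v_date are declared as in the original procedure:

```sql
-- Hypothetical sketch: combine the availability check and the lock in one
-- cursor. Rows another session has already locked are skipped rather than
-- fetched, so two sessions should not 'find' the same available user_id.
declare
  cursor c_avail is
    select mul.rowid rid, mul.user_id
    from master_user_list mul
    left outer join appuser au
      on au.user_id = mul.user_id
    where mul.usr_id_status = 'Available'
    and au.user_id is null
    for update of mul.usr_id_status, mul.usr_id_status_cd skip locked;
begin
  open c_avail;
  fetch c_avail into v_row_id, p_user.user_id;
  if c_avail%found then
    -- We hold the lock on this row, so the update cannot race
    update master_user_list
    set usr_id_status = 'Used',
        usr_id_status_cd = 'U',
        dt_id_iss = v_date
    where rowid = v_row_id;
  end if;
  close c_avail;
end;
```

This removes the need for the no_data_found handler that silently swallows the 'nothing left' condition; a simple %FOUND test makes the outcome explicit.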
This is the beauty and bane of Oracle's read consistency mechanism, which relies on undo to 'reconstruct' the data image as of the time the query began. Uncommitted inserts and updates can 'sneak in', allowing two or more sessions to 'take aim' at the same value until one session commits, at which time subsequent inserts fail. Updates are no safer, since sessions can queue up to update the same record in a table. When that happens, the last update to that record is the update the world sees, and changes made prior to that transaction are 'lost' as values updated by other sessions are overwritten. True, you can use DBMS_LOGMNR to resurrect those transactions in older releases of Oracle, and flashback technology in 10.2 and later releases allows you to see data as it was within the scope of undo_retention. It isn't an easy problem to solve.
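A simple two-session walk-through illustrates the window (the user_id value here is made up purely for illustration):

```sql
-- Session 1 marks a user_id as taken but has not yet committed:
update master_user_list
set usr_id_status = 'Used',
    usr_id_status_cd = 'U'
where user_id = 'USR000042';

-- Session 2, running at the same time, still sees the pre-update image
-- of that row courtesy of read consistency, so this query happily
-- returns USR000042 as 'Available':
select user_id
from master_user_list
where usr_id_status = 'Available';
```

Only after session 1 commits (and session 2 starts a new query) does the 'Used' status become visible, which is exactly the window in which two sessions can claim the same value.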
Sometimes there isn't a way around 'rolling your own' logic to process application data. Of course, the fewer times you have to do that the better off you are, since you can rely solely on primary key constraints to ensure data integrity (which is good, since constraint enforcement doesn't depend on your transactional logic behaving correctly).
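Taken to its logical conclusion, the primary key can do the arbitration for you. The following is a sketch, not the original application's code: it assumes user_id is the primary key of APPUSER and trims the insert down to a few columns for brevity. Each session simply attempts the insert; the loser of any race gets dup_val_on_index and moves on to the next candidate:

```sql
-- Hypothetical sketch: let the primary key arbitrate between sessions.
for u_cur in (select user_id
              from master_user_list
              where usr_id_status = 'Available') loop
  begin
    insert into appuser(user_id, created_by, create_date, upper_user_id)
    values (u_cur.user_id, v_user, v_date, upper(u_cur.user_id));

    update master_user_list
    set usr_id_status = 'Used',
        usr_id_status_cd = 'U',
        dt_id_iss = v_date
    where user_id = u_cur.user_id;

    exit;  -- success, this user_id is ours
  exception
    when dup_val_on_index then
      null;  -- another session won the race; try the next candidate
  end;
end loop;
```

The failure that the original code spent dozens of lines trying to avoid becomes the routine signal to try the next value.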
“Now you see me, now you don’t”: it can happen when seemingly simple tasks take on a level of complexity dictated by the application design. Using pre-defined user_id values, as in the code shown above, makes things more complicated than they need to be, in my opinion. Using generated values built from a specific template and a sequence might make the process easier and more efficient, but it won't allow for user_ids that have been pre-allocated; there may be a very good reason to have such user_ids, as those values may be assigned reserved, administrative privileges in the application. Some may insist that using a template and a sequence can't restrict the total number of users who can access an application, but that's not true: a sequence can have a maximum value, which would prevent any user accounts from being created once that maximum sequence value has been reached:
SQL> create sequence mynewseq
  2  start with 1 increment by 1 nocycle maxvalue 6 nocache;

Sequence created.

SQL>
SQL> select mynewseq.nextval from dual;

   NEXTVAL
----------
         1

SQL> select mynewseq.nextval from dual;

   NEXTVAL
----------
         2

SQL> select mynewseq.nextval from dual;

   NEXTVAL
----------
         3

SQL> select mynewseq.nextval from dual;

   NEXTVAL
----------
         4

SQL> select mynewseq.nextval from dual;

   NEXTVAL
----------
         5

SQL> select mynewseq.nextval from dual;

   NEXTVAL
----------
         6

SQL> select mynewseq.nextval from dual;
select mynewseq.nextval from dual
*
ERROR at line 1:
ORA-08004: sequence MYNEWSEQ.NEXTVAL exceeds MAXVALUE and cannot be instantiated

SQL>
Using an exception handler to trap and deal with the ORA-08004 error would allow the application to gracefully exit when the sequence has been exhausted.
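Such a handler could look like the following sketch; the 'USR' prefix, the six-digit padding and the use of the mynewseq sequence from above are illustrative assumptions, not requirements of any particular application. PRAGMA EXCEPTION_INIT binds a named exception to the -8004 error code, and the select from dual keeps the example compatible with releases that don't allow sequences in PL/SQL expressions:

```sql
-- Hypothetical sketch: generate a templated user_id from a sequence
-- and trap ORA-08004 gracefully when the sequence is exhausted.
declare
  seq_exhausted exception;
  pragma exception_init(seq_exhausted, -8004);
  v_new_id varchar2(30);
begin
  select 'USR'||lpad(to_char(mynewseq.nextval), 6, '0')
  into v_new_id
  from dual;

  dbms_output.put_line('Assigned user_id: '||v_new_id);
exception
  when seq_exhausted then
    -- The maximum number of user accounts has been reached;
    -- exit gracefully instead of raising an unhandled error.
    dbms_output.put_line('No more user accounts can be created.');
end;
/
```

The application can then return a meaningful message to the caller rather than an unhandled ORA-08004.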
It's tough finding yourself between a rock and a hard place when writing application code. Hopefully the developer will consider all possible options and choose the least 'undesirable', even though it's not the ideal solution. It's better to see such limitations before the application design is implemented and such flaws become painfully apparent, but that's not always possible, as developers may join a project so far along that a re-design is out of the question. Unfortunately, cost considerations can also interfere, causing design shortcuts that affect how the application is written. In the end it may be a 'do what you can with what you have' scenario, and you must accept that the 'solution' isn't ideal. Of course we can all hope that doesn't happen, but this is the real world, and sometimes restrictive budgets win over proper coding and design.
When life gives you lemons, don’t make lemonade, everybody else does that. Be creative, and make pie. Who knows, you just might improve the situation by showing your resourcefulness.