Sometimes you need to conserve space on a file system, and using exp with a named pipe to the compress utility is a nice way to export large databases while consuming the minimum amount of disk space. But sometimes things don’t happen the way you plan them. Sometimes the Sys Admin goes through the process of creating/allocating file systems and neglects the all-important task of asking whether the intended user needs large file support. Oops. [To be completely fair, though, for years exp and imp would not read/generate a .dmp file larger than 2 GB, and most operating systems wouldn’t allow a single file to be larger than 2 GB, period. It is only recently that these limits changed, so a ‘standard build’ of the operating system may still be using configurations left over from bygone days. I’d check with your system administrator regarding how these UNIX systems are configured.] Using exp through a named pipe to compress then exhibits some ‘interesting’ behaviour, as evidenced below:
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production
JServer Release 9.2.0.1.0 - Production

Export done in US7ASCII character set and UTF8 NCHAR character set
server uses UTF8 character set (possible charset conversion)

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions
. exporting system procedural objects and actions
. exporting pre-schema procedural objects and actions
. exporting cluster definitions
. about to export SYSTEM's tables via Direct Path ...
[usual list of users/tables here]
. exporting synonyms
. exporting views
. exporting referential integrity constraints
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting statistics
Export terminated successfully without warnings.
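The pipeline that produced that output can be sketched with ordinary commands, since the plumbing is identical; in this sketch gzip and echo stand in for compress and exp, and the /tmp paths are illustrative, not part of any real build:

```shell
# Sketch of the export-through-a-named-pipe pattern. gzip and echo stand
# in for compress and exp; the pipe plumbing is identical either way.
PIPE=/tmp/exp_demo_pipe
OUT=/tmp/exp_demo.dmp.gz
rm -f "$PIPE" "$OUT"
mknod "$PIPE" p                              # create the named pipe
gzip -c < "$PIPE" > "$OUT" &                 # reader: compress the pipe's contents
echo "pretend this is exp output" > "$PIPE"  # writer: exp would use file=$PIPE
wait                                         # let the compressor drain the pipe
ROUNDTRIP=$(gunzip -c "$OUT")                # prove the archive is complete
echo "$ROUNDTRIP"
rm -f "$PIPE"
```

With exp in place of echo, the writer simply names the pipe as its dump file and compress (in place of gzip) writes the real archive.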
Notice that exp displays no errors. This is because it could successfully write to the named pipe even though the destination file written by compress isn’t complete; errors from the utility at the far end of a named pipe don’t traverse the pipe when using redirection, and thus aren’t ‘registered’ by the originating application. [Both ends of a named pipe must be open at the same time for data to move through it, and the flow goes one way, from source to destination.] We don’t discover this fact until we invoke imp on the newly created file, again through a named pipe. This time, though, imp encounters an incomplete dump file and complains, rather loudly:
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production
JServer Release 9.2.0.1.0 - Production

Export file created by EXPORT:V09.02.00 via direct path
import done in US7ASCII character set and UTF8 NCHAR character set
import server uses UTF8 character set (possible charset conversion)
[usual imp output here]
IMP-00009: abnormal end of export file
IMP-00018: partial import of previous table completed: 7405342 rows imported
Import terminated successfully with warnings.
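The reason exp stayed silent can be demonstrated without Oracle at all. In this stand-alone sketch (Linux’s /dev/full stands in for the full file system, cat for compress), the writer’s exit status stays clean while `wait` exposes the reader’s failure to the shell that launched it:

```shell
# Why exp stayed silent: a write failure at the reading end of a pipe
# never travels back to the writer. /dev/full (Linux) stands in for a
# file system with no free space; cat stands in for compress.
PIPE=/tmp/status_demo_pipe
rm -f "$PIPE"
mknod "$PIPE" p
cat "$PIPE" > /dev/full &        # reader: every write fails with ENOSPC
READER=$!
echo "export stream" > "$PIPE"   # writer: succeeds, just as exp did
WRITER_RC=$?
wait "$READER"
READER_RC=$?                     # only the launching shell sees the failure
echo "writer=$WRITER_RC reader=$READER_RC"
rm -f "$PIPE"
```

The same idea applies to the real pipeline: launch compress in the background, remember its PID, run exp, then `wait` on that PID and treat a non-zero status as a sign of a truncated archive.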
The destination file system allowed a maximum file size of 2 GB, causing compress to produce an incomplete archive exactly 2 GB in size. It is rare indeed that a complete compressed dump file consumes exactly 2 GB of disk space (although it can occur), so a file of exactly that size is a red flag. Double oops. Again, note that because exp was writing to a named pipe it reported successful completion even though the destination file had been truncated.
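That observation suggests a cheap first screen before any import is attempted: compare the archive’s size against the ceiling. A minimal sketch, assuming the classic 2 GB (2^31 − 1 byte) limit; the function name is illustrative:

```shell
# Flag a dump file whose size sits at or beyond the classic 2 GB
# large-file ceiling; a complete compressed dump almost never lands
# there exactly, so such a file is almost certainly truncated.
LIMIT=2147483647                 # 2^31 - 1 bytes, the old maximum file offset
check_dump_size() {
    size=$(wc -c < "$1")
    if [ "$size" -ge "$LIMIT" ]; then
        echo "$1: $size bytes - at the file-size ceiling, likely truncated"
        return 1
    fi
    echo "$1: $size bytes - below the ceiling"
}
```

Running it against a suspect archive and acting on the non-zero return code catches the truncation without reading a single block through imp.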
So how in the name of Soupy Sales do you discover this before you fail on import? Use imp to verify the file, but use a ‘non-invasive’ option to prevent imp from doing any more work than simply reading through the file. The show=y or indexfile options are suitable for such checks:
imp user/pass file=… show=y log=mycheck.log
imp user/pass file=… indexfile=mycheck.log
would verify that a complete file was written. Of course, since the archive is compressed, you’ll need to read it through a named pipe, so the complete command sequence would be:
mknod /tmp/imp_pipe p
uncompress -c mydump.dmp.Z > /tmp/imp_pipe &
imp user/pass parfile=parfile_imp_test
(Here mydump.dmp.Z is a placeholder for your compressed export file; note that uncompress must write its output into the pipe, not operate on the pipe itself.)
(Of course there would be other ‘bits and pieces’ of success checking and exits on error status, but this isn’t intended to be a complete script.) The parfile_imp_test file contains the import parameters in effect, including the use of /tmp/imp_pipe as the source ‘file’. It should look like this:

file=/tmp/imp_pipe
show=y
log=mycheck.log
Another parameter option would be:

file=/tmp/imp_pipe
indexfile=mycheck.log
Either set of instructions would direct imp to simply read through the file until the end; errors generated by a truncated file would be displayed. If, perchance, you receive the dreaded IMP-00009: abnormal end of export file error, contact the System Administrator for the machine in question and bring this to his or her attention. Possibly a different file system on the same machine is configured to hold files larger than 2 GB, allowing you to redirect the output to a more ‘receptive’ location. If not, you’ll need to be notified when the problem is corrected; at that point you can run a new export and generate a usable compressed archive.
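The ‘success checking’ alluded to above can be as simple as scanning the verification log for the tell-tale error. A hedged sketch (the function name and log paths are illustrative, not a canonical interface):

```shell
# Scan an imp verification log for the error that betrays a truncated
# dump file; a non-zero return means the export must be rerun.
verify_imp_log() {
    if grep -q 'IMP-00009' "$1"; then
        echo "$1: abnormal end of export file - the dump is truncated"
        return 1
    fi
    echo "$1: no truncation errors found"
}
```

Called as `verify_imp_log mycheck.log` after the show=y or indexfile pass, this turns the manual eyeball check into something a cron job can act on.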