Always use direct I/O with Oracle

By Donald K. Burleson

Many Oracle shops are plagued with slow, I/O-intensive databases, and this tip is for anyone whose STATSPACK top-5 timed events show disk I/O as a major event.

Example Statspack output:

Top 5 Timed Events
                                                         % Total
Event                              Waits    Time (s)    Ela Time
--------------------------- ------------ ----------- -----------
db file sequential read            2,598       7,146       48.54
db file scattered read            25,519       3,246       22.04
library cache load lock              673       1,363        9.26
CPU time                           2,154         934        7.83
log file parallel write           19,157         837        5.68

This tip is important to you if you have read waits in your top-5 timed events. Remember, if disk I/O is not your bottleneck, then making it faster WILL NOT improve performance.

Also, this is an OS-level solution; often, I/O-bound Oracle databases can be fixed by tuning the SQL to reduce unnecessary large-table full-table scans. I monitor file I/O using the stats$filestatxs view, with a query along the lines of the sketch below.
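
This is only a sketch; the table and column names (stats$filestatxs, stats$snapshot, phyrds, readtim and so on) assume a standard STATSPACK schema and may differ between releases:

-- Per-file physical read/write counters (cumulative since instance
-- startup) as captured in the most recent STATSPACK snapshot.
select
   tsname,
   filename,
   phyrds     physical_reads,
   readtim    read_time,
   phywrts    physical_writes,
   writetim   write_time
from
   stats$filestatxs
where
   snap_id = (select max(snap_id) from stats$snapshot)
order by
   readtim desc;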

For optimal disk performance, Oracle should always use direct I/O to its data files, bypassing any caching at the OS layer. Direct I/O must be enabled both in Oracle and in the operating system. For background on Oracle direct I/O, refer to this note by Steve Adams:

* http://www.ixora.com.au/notes/filesystemio_options.htm

The debate:

Oracle supports tablespaces of multiple blocksizes within a single database, including 2k, 4k, 8k, 16k and 32k blocksizes.
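
Note that a tablespace with a non-default blocksize needs a matching buffer cache before it can be created. A minimal sketch, assuming an 8k default block size; the cache size, tablespace name and datafile path are illustrative only:

-- A 32k buffer cache must exist before a 32k-blocksize tablespace
-- can be created.
alter system set db_32k_cache_size = 64m;

create tablespace indx_32k
   datafile '/u01/oradata/prod/indx_32k_01.dbf' size 500m
   blocksize 32k;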

The debate is about the benefit of different blocksizes (e.g. 8k vs. 32k) for reducing Oracle physical disk I/O, and about whether the system administrator (SA) is likely to configure the Oracle server to use direct I/O.

Some claim that placing large related objects (e.g. indexes) in a large blocksize results in a single physical fetch (less disk I/O), but others say this is NOT true because the OS blocksize (and the JFS cache) results in multiple OS-level I/Os.

According to this web page on direct I/O by Steve Adams:

"It also avoids performance problems associated with using database block sizes that do not match the file system buffer size exactly. (more).

Whether direct I/O is available, and the way in which it can be enabled, are dependent on the operating system and file system type in use. Direct I/O can sometimes be enabled for an entire file system, using a file system mount option. Application programs may also be able to request direct I/O for a particular file by setting a flag when opening the file."

Adams goes on to note that direct I/O can increase the amount of RAM available for the working set in the Oracle data buffers:

"By switching to raw or direct I/O and by giving Oracle the memory that would otherwise be used by the operating system to cache Oracle data, a much larger working set of data can be cached, and a much higher cache hit rate can be sustained with obvious performance benefits."

Oracle controls direct I/O with a parameter named filesystemio_options. According to this page by Steve Adams, the filesystemio_options parameter must be set for Oracle to read data blocks directly from disk:

"Databases that use a combination of say raw log files and raw tempfiles with file system based datafiles may wish to use kernelized asynchronous I/O against the raw files, but to avoid inefficient threaded asynchronous I/O against the datafiles.

This can be done by allowing disk_asynch_io to default to TRUE, but setting filesystemio_options to either none or directIO."
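
On the Oracle side, a sketch of the change looks like this (setall, which requests both direct and asynchronous I/O, is another common choice; filesystemio_options is a static parameter, so an spfile and an instance restart are needed):

-- Request direct I/O for file system datafiles; takes effect at the
-- next instance startup.
alter system set filesystemio_options = directio scope = spfile;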

Checking your Server

Methods for configuring the OS will vary depending on the operating system and file system in use. Here are some quick checks you can perform to verify that you are using direct I/O, followed by a query you can run inside the instance to confirm the settings:

  • Solaris - Look for the "forcedirectio" mount option. Oracle DBAs claim this option makes a huge difference in I/O speed on Sun servers. Here is the Sun documentation: http://docs.sun.com/db/doc/816-0211/6m6nc6713?a=view

  • AIX - Look for a "dio" option. Here is a great link for AIX direct I/O: http://www-106.ibm.com/developerworks/eserver/articles/DirectIO.html

  • Veritas VxFS - (including HP-UX, Solaris and AIX), look for the "convosync=direct" mount option. It is also possible to enable direct I/O on a per-file basis using Veritas QIO; refer to the "qiostat" command and the corresponding man page for hints. For HP-UX, see Oracle on HP-UX - Best Practices.

  • Linux - Linux systems support direct I/O on a per-filehandle basis (which is much more flexible), and I believe Oracle enables this feature automatically. Someone should verify at what release Oracle started to support this feature (it is called O_DIRECT). See Kernel Asynchronous I/O (AIO) Support for Linux and this great OTN article: Talking Linux: OCFS Update.
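
Regardless of platform, it is also worth confirming from inside the instance which I/O options Oracle itself is set to use. A minimal check against v$parameter:

-- Show the I/O-related settings the instance is actually running with.
select name, value
  from v$parameter
 where name in ('filesystemio_options', 'disk_asynch_io');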

Comments

You say to always use direct I/O with Oracle. I tested direct I/O with our setup, and performance got noticeably worse.

I have found that the culprit in a few databases is tables that need purging or better indexing of their columns. The memory/I/O hogs can be seen with the following query. When I create the indexes or purge rows, I find the db block hit ratio jumping, the physical reads per hour dropping, and the megabytes of memory reported by this query dropping:

COLUMN object_in_memory FORMAT a40
COLUMN meg_in_memory    FORMAT 9999999999

SELECT
   COUNT(*)*8192/1024/1024 meg_in_memory,   -- assumes an 8k block size
   o.OBJECT_NAME           object_in_memory
FROM
   DBA_OBJECTS o,
   V$BH        bh
WHERE
   o.DATA_OBJECT_ID = bh.OBJD
AND o.OWNER != 'SYS'
GROUP BY
   o.OBJECT_NAME
HAVING
   COUNT(*) > 0
ORDER BY
   COUNT(*);