Sunday, September 09, 2012

Oracle database blocksize considerations

When creating an Oracle database (or a new tablespace) most people quickly skip past the question of the database blocksize. In most cases the default Oracle blocksize is selected, which is 8KB (8192 bytes), or on some platforms 4KB (4096 bytes). In general this is a good choice, however it can be worthwhile to consider / reconsider your blocksize when you have a demanding environment where database performance is of the essence.

You have to weigh the advantages and disadvantages of larger and smaller blocksizes for your Oracle database. From the Oracle 11.2 Database Performance Tuning Guide we can learn the following:

Smaller blocksize.
  Advantages;
  • Good for small rows with lots of random access.
  • Reduces block contention.
  Disadvantages;
  • Has relatively large space overhead due to metadata (that is, block header).
  • Not recommended for large rows. There might only be a few rows stored for each block, or worse, row chaining if a single row does not fit into a block.

Larger blocksize.
  Advantages;
  • Has lower overhead, so there is more room to store data.
  • Permits reading several rows into the buffer cache with a single I/O (depending on row size and block size).
  • Good for sequential access or very large rows (such as LOB data).

  Disadvantages;
  • Wastes space in the buffer cache, if you are doing random access to small rows and have a large block size. For example, with an 8 KB block size and 50 byte row size, you waste 7,950 bytes in the buffer cache when doing random access.
  • Not good for index blocks used in an OLTP environment, because they increase block contention on the index leaf blocks.
Oracle provides some guidelines for this decision, based upon whether your workload consists mostly of read operations or mostly of write operations. Remember that you can set a nonstandard blocksize per tablespace (a datafile inherits the blocksize of its tablespace), so when you are designing an architecture for your database you might consider placing your write-heavy segments in a different tablespace than the bulk of your read operations. You will have to make some trade-offs here, and this might affect your application design, however it is worth looking at your options.
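As a rough sketch of what such a multi-blocksize setup could look like (the tablespace name, file path and cache size below are made up for illustration), a nonstandard-blocksize tablespace first needs a matching buffer cache, configured via the DB_nK_CACHE_SIZE parameter:

```sql
-- Assuming the database default blocksize is 8KB, first allocate a
-- buffer cache for 16KB blocks (sized here purely for illustration).
ALTER SYSTEM SET db_16k_cache_size = 64M SCOPE=BOTH;

-- Then create a tablespace that uses the 16KB blocksize, for example
-- for sequentially scanned, read-mostly data (names are hypothetical).
CREATE TABLESPACE reporting_data
  DATAFILE '/u01/oradata/ORCL/reporting_data01.dbf' SIZE 512M
  BLOCKSIZE 16K;
```

Segments created in this tablespace will use 16KB blocks while the rest of the database keeps its default blocksize.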

Read operations;
Regardless of the size of the data, the goal is to minimize the number of reads required to retrieve the desired data.
  • If the rows are small and access is predominantly random, then choose a smaller block size.
  • If the rows are small and access is predominantly sequential, then choose a larger block size.
  • If the rows are small and access is both random and sequential, then it might be effective to choose a larger block size.
  • If the rows are large, such as rows containing large object (LOB) data, then choose a larger block size.
Write operations;
  • For high-concurrency OLTP systems, consider appropriate values for INITRANS, MAXTRANS, and FREELISTS when using a larger block size. These parameters affect the degree of update concurrency allowed within a block. However, you do not need to specify the value for FREELISTS when using automatic segment-space management.
  • If you are uncertain about which block size to choose, then try a database block size of 8 KB for most systems that process a large number of transactions. This represents a good compromise and is usually effective. Only systems processing LOB data need more than 8 KB.
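A minimal sketch of raising INITRANS on a hot OLTP table and its index (the table and values are hypothetical; note that with automatic segment-space management FREELISTS does not need to be specified):

```sql
-- Reserve more interested-transaction list (ITL) slots per block up
-- front, allowing more concurrent transactions to touch the same block.
CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  status   VARCHAR2(20)
) INITRANS 4;

-- Index leaf blocks are often the contention hotspot in OLTP systems,
-- so the index can be given a higher INITRANS as well.
CREATE INDEX orders_status_ix ON orders (status) INITRANS 8;
```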
A larger data block size provides greater efficiency in disk and memory I/O (access and storage of data). Therefore, consider specifying a block size larger than your operating system block size if the following conditions exist:
  • Oracle Database is on a large computer system with a large amount of memory and fast disk drives. For example, databases controlled by mainframe computers with vast hardware resources typically use a data block size of 4K or greater.
  • The operating system that runs Oracle Database uses a small operating system block size. For example, if the operating system block size is 1K and the default data block size matches this, the database may be performing an excessive amount of disk I/O during normal operation. For best performance in this case, a database block should consist of multiple operating system blocks.
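You can check the default blocksize of an existing database with a simple query against v$parameter (or SHOW PARAMETER db_block_size in SQL*Plus):

```sql
-- The database-wide default blocksize, fixed when the database
-- was created.
SELECT value AS db_block_size
FROM   v$parameter
WHERE  name = 'db_block_size';
```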
You can simply check your current blocksize per datafile by executing the below SQL query:
 SELECT name,
        bytes,
        blocks,
        block_size
   FROM v$datafile;
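Since a datafile inherits its blocksize from its tablespace, you can also list the blocksize per tablespace:

```sql
-- Tablespaces created with a nonstandard BLOCKSIZE show up here.
SELECT tablespace_name,
       block_size
FROM   dba_tablespaces
ORDER  BY tablespace_name;
```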
