Oracle Spells Out Flaw in Its Core Data Management System

September 27, 2009

Another white paper on Bitpipe. Sigh. I get notices of these documents with mind-numbing regularity. Most are thinly disguised apologia for a particular product in a congested market. I clicked on the link for the Line56 document “A Technical Overview of the Sun Oracle Exadata Storage Server and Database Machine” and started speed reading. [To access this link you may have to backtrack and get a Bitpipe user name and password.] I made it to page 29, but a fish hook was tugging at my understanding. I backtracked and spotted the segment that prompted a second, closer reading. The headline was “Today’s Limits on Database I/O” on page 2. Here’s the segment:

The Oracle Database provides an incredible amount of functionality to implement the most sophisticated OLTP and DW applications and to consolidate mixed workload environments. But to access terabytes databases with high performance, augmenting the smart database software with powerful hardware provides tremendous opportunities to deliver more database processing, faster, for the enterprise. Having powerful hardware to provide the required I/O rates and bandwidth for today’s applications, in addition to smart software, is key to the extreme performance delivered by the Exadata family of products. Traditional storage devices offer high storage capacity but are relatively slow and can not sustain the I/O rates for the transaction load the enterprise requires for its applications. Instead of hundreds of IOPS (I/Os per second) per disk enterprise applications require their systems deliver at least an order of magnitude higher IOPS to deliver the service enterprise end-users expect. This problem gets magnified when hundreds of disks reside behind a single storage controller. The IOPS that can be executed are severely limited by both the speed of the mechanical disk drive and the number of drives per storage controller.
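The arithmetic behind that passage is easy to sanity check. A minimal sketch, using my own assumed figures (roughly 200 random IOPS for a 15K RPM enterprise drive, which squares with the white paper’s “hundreds of IOPS,” and a hypothetical 50,000 IOPS workload target):

```python
# Back-of-the-envelope spindle count for an IOPS target.
# Both figures below are assumptions for illustration, not numbers
# taken from the white paper.

DISK_IOPS = 200          # assumed: one 15K RPM enterprise drive
REQUIRED_IOPS = 50_000   # assumed: a mid-size OLTP workload target

disks_needed = -(-REQUIRED_IOPS // DISK_IOPS)  # ceiling division
print(disks_needed)      # 250 spindles just to meet the IOPS target
```

At these (assumed) numbers, the capacity of the disks is almost irrelevant: you are buying spindles for their seek arms, not their platters, which is exactly the mismatch the excerpt describes.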

After the expensive upgrades and the additional licenses, I wonder how Oracle shops will react to this analysis of the limits of the traditional Oracle data management system. Even more interesting to me is that the plumbing has not been fixed. The solution is more exotic hardware. Do I hear the tolling of the bell for the Codd database? I do hear the sound of more money being sucked into the “old way”. Check out Aster Data or InfoBright. Might be useful.

Stephen Arnold, September 27, 2009


3 Responses to “Oracle Spells Out Flaw in Its Core Data Management System”

  1. Dave Menninger on September 28th, 2009 8:56 am

    I/O is clearly a bottleneck for data warehousing applications. You can either solve it through “more exotic hardware” as you say or use software designed to reduce the amount of I/O dramatically. Columnar databases do just that.
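The commenter’s point about columnar storage reducing I/O can be shown with a toy model. The table dimensions below are my own assumptions, chosen only to make the ratio concrete:

```python
# Toy model of why a column store cuts I/O for analytic scans.
# Assumed workload: a 10M-row, 50-column fact table, 8 bytes per
# value, and a query that aggregates a single column.

ROWS, COLS, BYTES_PER_VALUE = 10_000_000, 50, 8

row_store_bytes = ROWS * COLS * BYTES_PER_VALUE  # must read whole rows
col_store_bytes = ROWS * 1 * BYTES_PER_VALUE     # reads only the queried column

print(row_store_bytes // col_store_bytes)  # 50x less data off disk
```

The ratio is simply the column count: a row store drags every column off disk to answer a one-column question, so the wider the table, the bigger the win, before compression (which favors columnar layouts further) is even considered.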


  2. Eric Rogge on October 1st, 2009 11:35 am

    Hey Steve,
    I spent a number of years implementing BI and reporting tools against SQL databases. The usual drill would be to spend a few hours creating the report or view. Spend the next week or more tuning the SQL database with the hope that the user community wouldn’t crush the server. Test. Decide the hardware didn’t scale. Buy more hardware. Rinse. Repeat. Reset user expectations. The problem with relational databases is that joins occur at query time. Row-oriented storage makes this worse. Not a great design for analytic queries. The OLAP option and Materialized Views are a bag on the side of the box. Faster, but difficult to mutate as analytic needs and data sets change. Definitely job security for those involved. My $.02.
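The “joins occur at query time” complaint comes down to where the join cost is paid. A minimal sketch with hypothetical tables (the names and data are mine, purely for illustration):

```python
# Query-time join vs. a materialized view: same answer, the join
# work is just paid at a different moment. Toy data, assumed schema.

orders = [{"id": 1, "cust_id": 10, "amt": 5.0},
          {"id": 2, "cust_id": 11, "amt": 7.5}]
customers = {10: "Acme", 11: "Globex"}

def report_query_time():
    # Every report run re-pays the per-row lookup (join) cost.
    return [(customers[o["cust_id"]], o["amt"]) for o in orders]

# Materialized view: join once at load time, store the flat result.
materialized = report_query_time()

def report_materialized():
    # Queries read precomputed rows; no join work at query time.
    return materialized

print(report_materialized())  # [('Acme', 5.0), ('Globex', 7.5)]
```

This also shows the downside the commenter names: the materialized result is frozen to one question, so when the analytic need changes, the precomputation has to be rebuilt.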

  3. Stephen E. Arnold on October 1st, 2009 8:24 pm

    Eric Rogge,

    Thanks for the info.

    Stephen Arnold, October 1, 2009
