SAP and Oracle Chase Real Time

June 4, 2010

At the SLA Conference in New Orleans in a couple of weeks, I will be talking about real time information processing. That paper focuses on a taxonomy of real time. Most folks use the phrase “real time” without placing it in context. Like much of the blather about finding information germane to a specific need, 20 somethings, azure chip consultants, and the formerly employed glom on to a buzzword. Thank goodness I am 65 and happy paddling quietly in the goose pond here in Harrod’s Creek.

I read “SAP, Oracle and Real Real Time Apps.” You should read the article, consider its argument, and make up your own mind about real, real time. For me, the killer passage was:

Forgive me for being skeptical, but I’ve been asking myself these last few weeks why a database vendor hasn’t come up with something along the lines of what SAP now says it will deliver. In-memory and column-oriented technologies have been around for years, and vendors like Sybase and Vertica have been talking about 10X to 100X data compression for nearly as long. Did it really take an application vendor to think outside the box of the database market as we know it? Has it really been beyond outfits as talented and well-funded as IBM and Teradata to tackle these problems? Or have the database vendors been protecting the status quo and certain revenue streams? It seems even Oracle’s OLTP- and OLAP-capable Exadata doesn’t aspire to replace the data warehouse layer as we know it.

I think this is on the same page as my thinking, or maybe in the same chapter.


My view on SAP and Oracle is that neither company defines real time in a way that makes me feel comfortable. I get agitated when I hear the word “real” used to describe anything related to digital information. I don’t want to get into eschatology, but there’s a limit to my tolerance for “real”.

What’s real about big traditional database and IBM-inspired systems is that getting updates is tough. Even more problematic is the difference between processing data related to events or activities and processing information about those activities. Large systems have a tough time handling real time because latency is a fact of life. The bigger and clunkier the system, the more latency. Gmail went south for some users last week, and those users spotted the flaw because of the latency. What really happened is probably unknown to most Googlers except for the team that tracked down the problem and resolved it. But the restored service probably still has latency, just latency brief enough that the user perceives the system as working in what the user takes to be real time.

In my SLA lecture, I want to offer for comment and criticism a taxonomy of real time. My thought is that once the different types of real time have been described, it will be easier to evaluate what type of “real time” a particular system is delivering.

The broad phrase “real time” is essentially meaningless without definition. And the cost of decreasing latency in an SAP or Oracle system is one of the factors fueling interest in alternatives to the methods embedded in traditional enterprise information frameworks.

Real time for some traditional systems means “acceptably slow.” Most users are unable to determine the freshness of a data point even when there is an explicit time stamp. I enjoy the real time traffic display on my auto’s GPS system. By the time I arrive, the jam is gone. Real time?

Real time. Easy to say. Essentially meaningless in my opinion.

Stephen E Arnold, June 4, 2010

Freebie in real time.

Comments

2 Responses to “SAP and Oracle Chase Real Time”

  1. Dan Graham on June 4th, 2010 11:11 am

    Stephen,
    SAP did not invent “in-memory” concepts; they simply see it as a great solution to some performance problems. TimesTen was the first successful startup to realize the need for an in-memory capability. Oracle bought them in 2005. An in-memory solution must have data or results cached in main memory AND be able to quickly save or restore a snapshot of memory from disk. Since the early days of TimesTen, many vendors have added a similar recoverable cache to their middle tier application or database engine. For data warehouses, it is usually not cost effective to cache 2-4 terabytes in memory. Disks are still cheaper than memory, and the user’s demand for data far exceeds Moore’s Law.

    Data warehouse queries will often flush all available memory whenever a table scan or large SQL join is done. Consequently, products like TIBCO Spotfire save the aggregated results of a query “in-memory” for visualizations that run blazingly super fast. But when it comes time to drill down into the tree maps or graphs, Spotfire often has to send a SQL query to the Teradata Database for details. So in-memory does its best work as the front end to a data warehouse, aka a BI Tool.

    The vast majority of column-oriented benefits come from compression. Many database vendors offered compression algorithms with 10X-100X speed up long before Vertica or Sybase IQ made it a marketecture debate. Nevertheless, there are a few specific data domains where columnar has some advantages. Columnar is good stuff, but it’s not an iPod wiping out everything that came before. It’s just one of many performance acceleration techniques.

    We agree that “real time” means everything and nothing. Our Active Data Warehouse delivers 1 second response time to front line users as well as 1 hour complex queries for data miners, and everything in between concurrently. We avoid using the term real-time — it causes weird expectations and occasional panic.

    See the video from William McKnight on in-memory databases at:
    http://www.b-eye-network.com/watch/12111.

    Daniel.Graham@Teradata.com
    Active Data Warehouse Program Director
    Teradata Corporation

  2. Stephen E. Arnold on June 4th, 2010 3:52 pm

    Dan Graham,

    Thanks for correcting me on this point. I appreciate your comment about real time as well.

    Stephen E Arnold, June 4, 2010
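Editor’s note: Mr. Graham’s point that most column-store benefit comes from compression can be illustrated with a toy sketch. This is an editor’s illustration, not code from either author or any vendor; it shows run-length encoding (RLE) on a low-cardinality column, and an aggregate computed directly on the compressed form without decompressing.

```python
# Toy sketch of columnar run-length encoding (RLE), the kind of
# compression that gives column stores much of their edge.
# Illustrative only; engines like Vertica or Sybase IQ are far
# more sophisticated.

def rle_encode(column):
    """Compress a column into [value, run_length] pairs."""
    runs = []
    for value in column:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return runs

def count_where(runs, predicate):
    """Aggregate directly on the compressed column: no decompression."""
    return sum(length for value, length in runs if predicate(value))

# A sorted, low-cardinality column (e.g. a region code) compresses well:
region = ["EAST"] * 500_000 + ["WEST"] * 500_000
runs = rle_encode(region)

print(len(region), "values ->", len(runs), "runs")   # 1000000 values -> 2 runs
print(count_where(runs, lambda v: v == "WEST"))      # 500000
```

A million-row column collapses to two runs, and the count touches two entries instead of a million; on high-cardinality or unsorted columns the runs do not collapse, which is Mr. Graham’s caveat that columnar helps only in specific data domains.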
