Netflix Jumps to Amazon

January 2, 2010

Want to enrage a giant, Oracular bull?

Bad news for Oracle and IBM, as reported by Computerworld.com: Netflix is transferring its datacenter from Oracle on IBM hardware to Amazon Web Services’ (AWS) Elastic Compute Cloud (EC2) in an effort to save capital.  The switch comes as Netflix’s customer count is headed through the roof, and thus the cost and unreliability of maintaining or expanding the existing data centers is becoming too great a burden.

Netflix was already patronizing AWS for other, less critical applications like customer interfacing and even announced last May its intention to expand this relationship.  They weren’t kidding around.  The decision is prompted by three major cost points.  First, Oracle on IBM is inherently “very expensive”.  Second, it would have required long hours and great effort for Netflix to build its own data center, when systems are added to AWS’s cloud with ease.  And finally, “EC2’s pay-as-you-go model means costs are elastic,” so no more paying for unused resources stranded on a service contract.
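The pay-as-you-go point is easy to illustrate with some back-of-the-envelope arithmetic.  The hourly rate and demand curve below are made-up numbers for illustration, not Netflix’s or Amazon’s actual figures:

```python
# Compare a fixed data center provisioned for peak load with
# pay-as-you-go cloud capacity billed only for servers actually used.
# All figures are illustrative, not real AWS or Netflix numbers.

HOURS_PER_MONTH = 730

def fixed_cost(peak_servers, cost_per_server_hour):
    # On-premises: pay for peak capacity around the clock.
    return peak_servers * cost_per_server_hour * HOURS_PER_MONTH

def elastic_cost(hourly_demand, cost_per_server_hour):
    # Cloud: pay only for the servers needed each hour.
    return sum(demand * cost_per_server_hour for demand in hourly_demand)

# A spiky demand curve: 20 servers at the evening peak, 5 otherwise.
demand = [20 if h % 24 in (19, 20, 21) else 5 for h in range(HOURS_PER_MONTH)]

rate = 0.40  # dollars per server-hour (hypothetical)
print(f"fixed:   ${fixed_cost(20, rate):,.2f}")
print(f"elastic: ${elastic_cost(demand, rate):,.2f}")
```

With a load that peaks three hours a day, the elastic bill comes out at roughly a third of the fixed one, which is the whole argument in one line of arithmetic.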

Besides those direct cost reductions, the transition will free up the engineering resources now required to baby-sit the existing infrastructure so they can be re-tasked in other areas.

Netflix makes some compelling arguments here; it doesn’t take long for the dominoes to fall.  I wonder if other companies will reach the same conclusion and follow suit.  It would be prudent for Oracle and IBM to investigate what upgrade options exist to be more competitive with AWS and to prevent further customer turnover.

Sarah Rogers, January 2, 2010

Freebie

A Google Cheerleader Gently Disses MSFT

December 31, 2009

Short honk: A few years ago, I had difficulty finding examples of Google technology “in the wild.” In fact, I telephoned a Google reseller to ask a question. The reseller would not speak with me until the reseller coordinated with Google. I can’t reveal the details of why I called, but let us say that the call was not an unfriendly one.

Flash forward to December 22, 2009, and the blog post by a boss / janitor: “How Google and the Cloud Changed My Company.” The write up has plenty of gory details: executive resistance to the idea of using Google Apps. The best part was this comment:

Oh, did I mention the price? I estimate we will have saved almost $1,000 per employee between hardware and software costs — not to mention the deployment and maintenance savings that we reap over time. Woah. I just took a moment to re-read what I have written. Sounds like I work for Google. I don’t. But this blog is about what works for business and I feel that Google made a bold move to make businesses work better. I actually am not a Microsoft Hater anymore. Outgrew that when I put away the code. I just think they are an old and overpriced model. It will be interesting to see how good their response to Google Docs is: Office Web Apps. I bet MSFT isn’t used to playing catch-up on one of their core businesses!

How times have changed. Google’s burgeoning PR team could not have crafted a better testimonial. Oh, I found this using Google Blogsearch too. Indexed right smartly as well.

Stephen E. Arnold, December 30, 2009

I wish to report to the National Institutes of Health that Google’s grassroots PR is doing fine, thank you. And, because I am not a PhD, I was not paid for my write up, attention, or scrutiny of the patient.

Cloud Performance

December 5, 2009

After the endnote session at the International Online Show, Charlie Hull, Lemur Consulting, and I were talking about various aspects of open source technology. Mr. Hull has a positive view of open source, and I try to be disputatious whenever possible. Since Mr. Hull purchased my hot chocolate (small hot chocolate, in point of fact), I pushed back a bit. I focused on the issue of performance of certain open source software. Committee-built gizmos may lack the trim tummies found in some commercial software solutions. I recalled seeing a performance comparison of some open source and commercial cloud solutions, and I said that I would dig up the article and post a comment.

The write up was “VPS Performance Comparison” in the Journal of Eivind Uggedal. You can see what fun it is to have a hot chocolate with a Lemur and a goose! The guts of this quite interesting piece of research are spilled in several charts. The systems put through their paces via scripts and some test data included a number of VPS providers, Amazon EC2 and Linode among them.

The results, thoughtfully accompanied by some useful fee metrics, were interesting. The data revealed that Mr. Hull (the modest lemur) and I (the addled goose) were both correct. He and I also like Banksy, the street artist, and we have several other areas of agreement as well. Quite depressing, I might add.

I want to urge you to read Mr. Uggedal’s essay, so I will point out one chart that I found illuminating:

chart

I know the lines are difficult to see, but the point is that Amazon is predictable if pokey. Several vendors consistently lag the others, and the top performers in this test, which is close to a Web application’s load, are zipping right along. The speediest are summed up by Mr. Uggedal this way:

Linode. 32-bit gave the best results on the Unixbench runs while 64-bit was fastest on the Django and database tests.

Quite a nice piece of work. Lemurs and geese agree that analyses like Mr. Uggedal’s can shed light on certain technical issues. Nevertheless, I assert that a wet goose is more sleek than the average dry lemur.
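For the curious, comparisons of this kind boil down to timing the same workload on each box and reporting the best run. A minimal sketch of such a harness follows; the workload is a hypothetical stand-in, not Mr. Uggedal’s actual test scripts:

```python
import time

def cpu_workload(n=200_000):
    # A stand-in CPU-bound task, roughly the sort of thing a
    # Unixbench-style run stresses. Purely illustrative.
    total = 0
    for i in range(1, n):
        total += i * i % 7
    return total

def benchmark(task, runs=3):
    # Report the best of several runs to damp scheduling noise,
    # a common convention in benchmark write ups.
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        timings.append(time.perf_counter() - start)
    return min(timings)

best = benchmark(cpu_workload)
print(f"best of 3 runs: {best:.4f}s")
```

Run the same harness on each VPS and the per-host numbers line up into exactly the kind of charts the article presents.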

Stephen Arnold, December 5, 2009

Oyez, oyez, I wish to disclose that I wrote this essay and referenced Lemur Consulting because I was paid off with a cup of hot chocolate. Small cup, mind you. To whom do I report this commercial transaction? I think the US Federal Aviation Administration has jurisdiction over addled geese. Must comply, of course.

Cloud Math from Merrill Lynch

November 27, 2009

I read “Merrill Lynch: Cloud Computing Market Will Reach $160 Billion…Really?”. Crazy forecasts once were the core competency of 20-year-old consultants at down market research firms and crazed MBAs looking for a big win. In my experience, the wacky stuff from the research folks at large, diversified investment firms has some math and some data to ingest to obtain maximum spreadsheet fever. I followed the links in the ReadWriteWeb.com article and reached three conclusions for myself, not you, gentle reader.

First, any thought that the financial meltdown trimmed the sails of the wacky MBAs is out the window. The forecast for cloud computing to hit $160 billion in 13 months is crazier than anything this addled goose has been able to concoct in recent memory. The data available are fuzzy, but I suppose it is possible to dig through a college math book, find a method, and stuff in variables until a magic number plops out like an egg from a steroid-stuffed squab. Will government oversight address wacky speculation about market size? Government what?

Second, one would think that companies engaged in cloud computing might offer some anchor points. Last time I checked, none of these outfits defines what cloud computing is, in the hopes of making their products and services part of the next big thing—no matter what it turns out to be. I think the vendors throw gasoline on the fires of greed that burn in the analysts’ empty inner furnaces. They are, in my view, “hollow men” and hollow women.

Third, what about customers? Do customers know whether a particular computer is doing something in the machine itself or somewhere else? The customer who reads the number $160 billion is likely to ask, “So what?” Yes, exactly. So what. As devices connect and run local software, the notion of dividing the elements of computing into distinct components means zero to the user.

To sum up, Merrill Lynch’s $160 billion number for 2011 says to me, “We’re back. Let’s pump those stocks, baby. Churn is good.”

Stephen Arnold, November 27, 2009

I wish to disclose to the Securities & Exchange Commission, a top notch watchdog if there ever was one, that I was not paid to point out that wacky MBAs are making up numbers. Just like in the good old days. Bernie Madoff is probably doing some rough numbers right now.

Google and Its Desired Repositories

November 21, 2009

I find “desired repositories” quite enticing. I was going to call this write up “A Repository Named Desire” but I was fearful that some lawyer responsible for the Tennessee Williams play would object. Most of the Sergey-and-Larry-eat-pizza Google pundits follow the red herrings dragged by the Googlers toward the end of each week. Not me. I pretty much ignore the Google public statements because those have a surreal quality for me. The messages seem oddly disconnected from what Google’s deep thinkers are *actually doing*. When Google does a webinar, it is too late for the competitors to do much more than go to their health club and work off their frustrations.

desired repository

That looks simple. From US20090287664. Notice that the types of repositories are extensible.

If you want to see some of the fine tuning underway with the Google plumbing, take a peek at 20090287664, Determination of a Desired Repository. This is a continuation of a 2005(!) invention in case you thought the method looked familiar. You can find the write up at your favorite US government Web site, the USPTO. (Don’t you just love that search interface? Someone told me that the search engine was from OpenText, and I am trying to verify that statement.)

Here’s what caught my attention:

A system receives a search query from a user and searches a group of repositories, based on the search query, to identify, for each of the repositories, a set of search results. The system also identifies one of the repositories based on a likelihood that the user desires information from the identified repository and presents the set of search results associated with the identified repository.

Seems obvious, right? Now think of this at Google scale. Different problem? It is in my book. What has the Google accomplished? Just one claim. Desired repositories at Google scale.

Stephen Arnold, November 21, 2009

Again, I want to report to the USPTO that I was not paid to write yet another cryptic comment about a Google plumbing invention.

Microsoft and the Cloud Burger: Have It Your Way

November 19, 2009

I am in lovely and organized Washington, DC, courtesy of MarkLogic. The MarkLogic events pull hundreds of people, so I go where the action is. Some of the search experts are at a search centric show, but search is a bit yesterday in my opinion. There’s a different content processing future, and I want to be prowling that busy boulevard, not sitting alone on a bench in the autumn of a market sector.

The MarkLogic folks wanted me to poke my nose into its user meeting. That was a good experience. And now I am cooling my heels for a Beltway Bandit client. I have my watch and my wallet. With peace of mind, I thought I would catch up on my newsreader goodies.

I read with some surprise “Windows Server’s Plan to Move Customers Back Off the Cloud” in Betanews. As I understand the news story, Microsoft wants its customers to use the cloud, the Azure service. Then, when the fancy strikes, the customer can license on premises software and populate big, hot, expensive to maintain servers in the licensee’s own data center. I find the “have it your own way” appealing. I was under the impression that the future was the cloud. If I understand this write up, the cloud is not really the future. The “future” is the approach to computing that has been here since I took my first computer programming class in 1963 or so.

I found this passage in the article interesting:

“If you write your code for Windows Server AppFabric, it should run on Windows Azure,” said Ottaway, referring to the new mix-and-match composite applications system for the IIS platform. “What we are delivering in 2010 is a CTP [community technology preview] of AppFabric, called Windows Azure AppFabric, where you should be able to take the exact same code that you wrote for Windows Server AppFabric, and with zero or minimal refactoring, be able to put it up on Windows Azure and run it.” AppFabric for now appears to include a methodology for customers to rapidly deploy applications and services based on common components. But for many of these components, there will be analogs between the on-Earth and off-Earth versions, if you will, such that all or part of these apps may be translated between locales as necessary.

Note the “shoulds”. Also, there’s a “may be”. Great. What does this “have it your own way” mean for enterprise search?

First, I don’t think that the Fast ESP system is going to be as adept as Blossom, Exalead, or Google at indexing and serving results from the cloud for enterprise customers. The leader in this segment is not Google. I would give the nod to Blossom and Exalead. There’s no “should” with these systems. Both deliver.

Second, the latency for a hybrid application when processing content is going to be an interesting challenge for those brave enough to tackle the job. I recall some issues with other vendors’ hybrid systems. In fact, these performance problems were among the reasons that these vendors are not exactly thriving today. Sorry, I cannot mention names. Use your imagination or sift through the articles I have written about long gone vendors.

Third, Microsoft is working from established code bases and has added layers—wrappers, in my opinion—to these existing chunks of code. That’s an issue for me because weird stuff can happen. Yesterday one Internet service provider told me that his shop was sticking with SQL Server 2000. “We have it under control,” he said. With new layers of code, I am not convinced that those building a cloud and on premises solution using SharePoint 2010 and the “new” Fast ESP search system are going to have stress free days.

In short, more Microsoft marketing messages sound like IBM’s marketing messages. Come to think of it, hamburger chains have a similar problem. I think this play is jargon for finding ways to maximize revenues, not efficiencies for customers. When I go to a fast food chain, no matter what I order, the stuff tastes the same and delivers the same health benefits. And there’s a “solution accelerator.” I will have pickles with that. Just my opinion.

Stephen Arnold, November 19, 2009

I hereby disclose to the Internal Revenue Service and the Food and Drug Administration that this missive was written whilst waiting for a client to summon me to talk about topics unrelated to this post. This means that the write up is a gift. Report it as such on your tax report and watch your diet.

Google and Speed, Which Kills

November 16, 2009

Google’s focus on speed is one of those isolated Google dots that invite connection with other Google dots. Connecting the dots is easy when you are in grade school. The dots are big and the images used in grade school have parts filled in to help the easily bored student. Check out the image from Natural Environment Club for Kids. Looks like a flower and a bee, doesn’t it?

image

Connecting Google dots is a bit more complicated. The Google dots look more like this type of puzzle:

image

So where does speed fit into the Google dots? You will want to read “Google: Page Speed May Become a Ranking Factor in 2010: Algorithm Change Would Make Slow Sites Rank Lower”. Chris Crum wrote:

Google has generally been pretty good at providing webmasters with tools they can use to help optimize their sites and potentially boost rankings and conversions. Google recently announced a Site Speed site, which provides webmasters with even more resources specifically aimed at speeding up their pages. Some of these, such as Page Speed and Closure tools come from Google itself. But there are a number of tools Google points you to from other developers as well.  If you’re serious about wanting your site to perform better in search engines, and you haven’t given much thought to load times and such, it’s time to readjust your way of thinking. Caffeine increases the speed at which Google can index content. Wouldn’t it make sense if your site helped the process along?

No push back on this from me. Let me shift the discussion from a dot connected to PageRank to a dot that has a sharper angle.

Speed is a big deal. Google itself wants stuff to run quickly. However, in my research speed is *the* Achilles’ heel for its principal competitors in Web search and in the enterprise. In fact, speed and scale are the Scylla and Charybdis through which most companies have to navigate. If you have had to figure out how much it costs to scale a system like SharePoint or to make Oracle response times improve, you know exactly what the challenges are.
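The ranking idea Mr. Crum describes can be sketched in a few lines: discount a page’s relevance score by its measured load time. The scores, load times, and penalty curve below are invented for illustration and are in no way Google’s actual algorithm:

```python
# Sketch of page speed as a ranking factor: slower pages get demoted.
# All scores, load times, and the penalty weight are hypothetical.

def speed_adjusted_score(relevance, load_time_s, penalty_per_second=0.1):
    # Subtract a small penalty per second of load time, floored at zero.
    return max(0.0, relevance - penalty_per_second * load_time_s)

pages = {
    "fast.example.com":  (0.80, 0.4),   # (relevance, load time in seconds)
    "slow.example.com":  (0.85, 6.0),
    "pokey.example.com": (0.70, 12.0),
}

ranked = sorted(pages,
                key=lambda p: speed_adjusted_score(*pages[p]),
                reverse=True)
for page in ranked:
    rel, t = pages[page]
    print(f"{page}: {speed_adjusted_score(rel, t):.2f}")
```

Note what the toy example shows: the slow site starts with the higher relevance score yet ends up ranked below the fast one, which is exactly the incentive a speed factor creates for webmasters.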

Speed will be a competitive wedge that Google uses to put significant pressure on its competitors’ Atlas major in late 2009 and throughout 2010. When the dots are connected, here’s the image that the competitors Google targets will see when the picture is complete:

image

Speed is a killer for IBM, Microsoft, Oracle, and Yahoo. Speed makes systems fluid. Users may not know an n-space from a horse race, but speed is addictive. Cheap speed is a competitive angle that could spell trouble for companies that mock Google’s spending for lots of its dots.

Stephen Arnold, November 15, 2009

I wish to report to the Superfund Basic Research Program that the research upon which these comments rest was funded by some big outfits who have gone out of business in the financial meltdown. This short article is based on recycled material of minimal commercial value. I wonder if I can apply for superfund support?

SAP and Its Pricing: A Sign of Deeper Challenges?

November 15, 2009

SAP is an outfit that provides me with some clues about what will happen to over-large enterprise software vendors. The company grew via acquisition. The company followed IBM’s approach to generating revenue from services. The company made shifts in its services pricing. The company has done just about every trick in the MBA handbook, yet revenues continued to soften. The most recent MBA play at SAP is disclosed in a news report from Reuters called “SAP Plans to Raise Licensing Fees”. The notion of releasing interesting news when most people are eating donuts and thinking about their dwindling retirement accounts is catching on among big companies. Fortunately for us in Harrod’s Creek, Reuters never sleeps. The story revealed:

Germany’s largest software company, SAP AG (SAPG.DE), plans to raise licensing fees for thousands of clients who use older versions of its software, German weekly Wirtschaftswoche reported on Saturday. “SAP’s older customers will be especially affected — that means the most loyal,” Andreas Oczko, deputy head of the German SAP client advocacy group DSAG told the magazine. The magazine said older clients who do not switch to newer versions of software applications or have not switched to a new incremental price structure will see the largest cost changes.

There you go. Upgrade or pay more. Upgrade and pay more for engineering support. That’s the MBA play of the week in my opinion.

What about customers who do nothing? Maybe some of these people will take a close look at their options. In a year, Google will have most of the SAP functionalities latent within the expanding Apps ecosystem. Then what? In my opinion, SAP may find that its business challenges have been made more problematic by the Google.

I am eagerly awaiting the unfolding of events in 2010.

Stephen Arnold, November 15, 2009

The Veterans Day Committee has to be aware that this opinion is uncompensated. I might add that canny veterans may want to check out their holdings in SAP to avoid the Wal*Mart greeter syndrome.

Clop Cloppity Clop Clop: The Sound of Google in Education

November 14, 2009

I don’t want to belabor the obvious, but educational publishers may want to keep a close eye on the Google. The firm has been gaining traction in education at an increasingly rapid pace since 2006, the pivotal year, in case you have been following my analyses of Google. If you are unaware of the Google as a one-stop shop for education, you may want to read “Gone Google at Educause 2009”. A key passage in this write up was, in my opinion:

Lots has happened over the past year especially: more than 100 new features have rolled out in Google Apps, we’ve engaged well over six million students and faculty (a 400% increase since this time last year), launched free Google Message Security for K-12 schools and have integrated with other learning services such as Blackboard and Moodle. These developments are just the beginning. According to the newly-released 2009 Campus Computing survey statistics, 44% of colleges and universities have converted to a hosted student email solution, while another 37% are currently evaluating the move. Of those that have migrated, over half — 56% precisely — are going Google.

Course materials? Coming in saddle bags strapped to Googzilla. Clop Cloppity Clop Clop—One of the four horsemen of the Apocalypse heading your way?

Stephen Arnold, November 14, 2009

I wish to report to the Defense Commissary Agency that I was fed one donut at my father’s assisted living facility. However, writing this article and the payment of a small donut are in no way related. The donut was better than the one I got at MacDill too.

Hosted Search and Data Center Basics

November 13, 2009

Hosted search is tough enough to sell without dragging the vendor’s data center into the deal. The best hosted services are picky about their data center tie ups. More casual vendors of hosted search are somewhat more casual. If you don’t know about the wild and exciting world of data centers, you will want to read and save “Questions Data Center Operators Don’t Want You to Ask”. The article provides a wealth of useful information. For me, the most interesting segment in the five meaty segments was:

“The SAS70 audit should include all the following sections:
• Security
  • Security Company profile
  • Key inventories
  • Access management
  • Badges
  • Biometrics
  • Staff selection criteria
  • Materials control
  • Confirmation each security guard has completed a background check
  • Security equipment is routinely inspected/tested
  • Security “rounds” are recorded and confirmed
  • Security camera images and access logs are kept for a minimum 60 days, longer is preferred
• Maintenance/CMMS
  • Comprehensive preventive maintenance/testing schedule for ALL mechanical and electrical equipment
  • UPS
  • Emergency generators
  • Rectifiers/DC Plant
  • ATS
  • Switchgear
  • Complete semi-annual (or more frequent) infrared scan
  • Breaker audit for NEC compliance (or automated view via current transformers)
• Service level agreements
  • Emergency call out for all critical M&E equipment
  • Diesel refueling during emergencies or extended operation
• Human Resources
  • Staffing process
  • Background checks
  • Certifications
  • Termination management
• Operations
  • Recurring training
  • Recurring staff meetings
  • Business continuity and disaster recovery plans
  • Daily site verifications
  • Escalation process.”

Useful indeed. Lots more information in the original article.
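A prospective customer could turn such a checklist into a simple coverage check against a vendor’s audit report. The sketch below uses a subset of the article’s sections; the sample audit data are invented for illustration:

```python
# Check a data center's SAS 70 audit report against a required checklist.
# Section and item names follow the article's list; the sample audit
# report below is hypothetical.

REQUIRED = {
    "Security": {"Access management", "Badges", "Biometrics",
                 "Background checks", "Camera image retention >= 60 days"},
    "Maintenance/CMMS": {"Preventive maintenance schedule", "UPS",
                         "Emergency generators", "Infrared scan"},
    "Human Resources": {"Staffing process", "Certifications"},
    "Operations": {"Recurring training", "Disaster recovery plans",
                   "Escalation process"},
}

def audit_gaps(audit):
    # Return, per section, the required items the audit failed to cover.
    gaps = {}
    for section, items in REQUIRED.items():
        missing = items - audit.get(section, set())
        if missing:
            gaps[section] = missing
    return gaps

sample_audit = {
    "Security": {"Access management", "Badges", "Biometrics",
                 "Background checks", "Camera image retention >= 60 days"},
    "Maintenance/CMMS": {"Preventive maintenance schedule", "UPS"},
    "Operations": {"Recurring training", "Escalation process"},
}

for section, missing in audit_gaps(sample_audit).items():
    print(f"{section}: missing {sorted(missing)}")
```

Any section that comes back with missing items is one of the questions the article says operators would rather you not ask.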

Stephen Arnold, November 13, 2009

I wish that the author of this nice article would pay me. He did not. I suppose I will have to disclose to the Dunlop, Illinois, sheriff that I am working without any money. Maybe I should go back to raising Poland Chinas.
