eBay Analysis: Overlooking the Ling Temco Vought Case

February 17, 2009

Not long ago, I wrote about a trophy generation whiz kid who analyzed company and product failure. I pointed out that a seminal study shed light on the “new” developments the whiz kid was pontificating about. The whiz kid pulled the post. I just read a post in a Web log that I find interesting most of the time. The author of “Why eBay Should Consider Breaking Itself Up” is Kevin Kelleher. You can read the story here. The premise of the story is one with which I agree. eBay has collected a number of eCommerce companies and failed to make any of them generate enough revenue to feed the cost maw of the parent. What made me shake my tail feathers was:

  1. The use cases or case studies of the Ling Temco Vought style roll ups do not get a mention in the article. Most business school students know the LTV case, and more than a few investors were burned when LTV went south.
  2. The failure to integrate acquisitions is not a problem exclusive to eBay. Mr. Kelleher left me with the impression that eBay was a singleton. It is not. Other companies with a similar gaggle of entities and similar cost control problems include AOL, IAC (mentioned by Mr. Kelleher), Microsoft, and Yahoo. A key point is that the overhead associated with LTV style operations increases over time. Therefore, instead of getting better numbers, the LTV style roll up generates worsening numbers. In a lousy economy, the decline is accelerated and may not be reversible. There’s no sell off because time is running out on eBay and possibly on a couple of the other companies I mentioned.
  3. The impact of LTV type failures destabilizes other businesses in the ecosystem. I never liked the Vietnam era “domino theory”, but I think the possibility of a sequence of failures increases in the LTV type of failure.

You should read the story because the information about eBay is useful. If you have an MBA, you probably have LTV data at your fingertips. Historical context is important to me. Maybe next time?

Stephen Arnold, February 17, 2009

Bad News for the Business Information Crowd

February 17, 2009

The juggernaut of business information has slowed down. The upward growth of revenues for data priced at a premium for analysts, wheeler dealers, and the banking crowd has stalled. Two companies bet heavily that the party would continue past curfew. The bartender, however, shut down early and seems to be working half days. You can read Jay Yarrow’s good summary “Bloomberg and Reuters Clobbered” here. I liked his lead: “This one might need to be filed in the Duh category, but according to analysis by Douglas Taylor, the managing director of consulting group, Burton-Taylor, Bloomberg and Reuters are facing steep revenue losses for 2009.”

I agree.

For those who keep up with this Beyond Search Web log, you will find the story about a company that appears to have an arm’s length relationship with Thomson Reuters germane. Click here to read about the debt burden of Cengage, the former educational and professional publishing unit of Thomson Reuters. If the financial house of cards begins to collapse, it is interesting to think of the impact the shock waves will have. Washington’s appetite to bail out companies may become satiated. The Wall Street bonus payouts and GM’s ultimatum of bankruptcy or more cash are burrs under some folks’ saddles. Now peripheral companies may be sucked into a whirlpool of red ink with no life preserver available.

Stephen Arnold, February 17, 2009

Twitter in the Enterprise

February 17, 2009

Real time search is useful. Twitter in the enterprise is an interesting idea. If you are gung ho to marry your organization and real time information, you will want to read “Twitter Is Now a Must in the Enterprise” by Jason Meserve here. This is a three part article. In it, Twitter can do no wrong: Twitter spam is not an issue, and Twitter is not prone to crashes. Those concerns simply do not appear in Mr. Meserve’s rah rah write up. I don’t want to dwell on the shortcomings of the article because it presents a number of useful examples and new data; for instance:

According to a Network World survey of 583 IT execs, 84% said they visit social networking sites on a regular basis, up from 68% last year. In fact, half of our respondents said they visit a social networking site at least several times a week. Only 29% said they visit social networking sites solely for entertainment purposes, and 64% said they are using social networks more than they did a year ago.

In addition, the word security is mentioned, which is unusual for most social networking articles.

Real time search is a big deal for competitive and other types of intelligence professionals. If you haven’t fiddled with Twitter search, navigate to http://search.twitter.com. The performance can be sluggish at times. Try to search for your name or the name of your employer. Try a code word. With some experimentation, you will find some interesting items.
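If you want to poke at the service from a script rather than the Web form, here is a minimal sketch. It assumes the JSON endpoint the Twitter search service exposed at the time (search.twitter.com/search.json) and its rpp results-per-page parameter; adjust both if the service has changed.

```python
import json
import urllib.parse
import urllib.request

def search_twitter(query, base="http://search.twitter.com/search.json"):
    """Return (author, text) pairs from the assumed Twitter search JSON endpoint."""
    url = base + "?" + urllib.parse.urlencode({"q": query, "rpp": 20})
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    # Each result is assumed to carry the author handle and the tweet text.
    return [(item.get("from_user"), item.get("text"))
            for item in payload.get("results", [])]

if __name__ == "__main__":
    # Try your own name, your employer, or a code word, as suggested above.
    for author, text in search_twitter("enterprise search"):
        print(author, "-", text)
```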

I think it is wonderful if other people use Twitter. The spam problem is a harbinger of other network excitement as well. In some organizations, Twitter might be a bit problematic at eDiscovery time. Oh, the cheerleading will have to fit in some important words soon. I am waiting for words like spam, monitoring for compliance purposes, and uptime.

Stephen Arnold, February 17, 2009

Google Now May Have a Saviour in the UK

February 17, 2009

One of the dead tree crowd in the UK–The Daily Telegraph–rustled my pinfeathers with “Google’s UK Chief Matt Brittin Could Prove a Saviour.” I thought Google UK had a different chief, but I guess I am behind the times. The former chief–an Oracle escapee–must not have been a chief. Rupert Neate set me straight on that matter here. The article makes much of the 40-year-old Googler as a person concerned with the fate of dead tree publications like The Daily Telegraph. Mr. Neate tells me that the “6ft 3in tall” Mr. Brittin is a bronze medal rower (1998 World Rowing Championship). Mr. Brittin also “joined Google last January after three years at Trinity Mirror, as the person who made the “greatest individual contribution to new media” in 2008.”

Now the meat of the story–assuming the résumé of a Googler is not the point of the article–seemed to be summed up in this quote:

“Many publishers are partners of Google and we work together by providing targeted advertising to their websites so they can make money out of dead space,” he says. “In the last three months of last year we gave away $1.4bn (£970m) of revenue to publishing partners for adverts on their sites. All we are trying to do is help traditional media in a new environment.”

There are some other interesting items about creativity, Google’s DNA, and mobile telephony. But the point of this article is that Google is the publishers’ pal.

My research suggests that Google is a profit-making enterprise learning that its products and services can ignite strong reaction, costly litigation, and embarrassing public squabbles with people who don’t win rowing championships and work at Google.

The Google is an information company. It has developed an end-to-end platform. Users are just beginning to get a sense of what the platform can do. Google itself has seemed reluctant to identify some of the Google infrastructure’s key functions. No surprise. With journalists who wax poetic over a rowing championship, why fiddle with the rosy tint illuminating the Google?

I still don’t get the “saviour” part. Traditional media are behind in a rowing competition of sorts. Not even Google’s Mr. Brittin can pull a pal’s oar hard enough to reverse the accelerating decline. If push comes to shove, Mr. Brittin and the other Googlers, in my opinion, will save themselves, not traditional media. Shareholders and regulators expect nothing less. To ignore fiduciary responsibility creates serious problems for Google.

For more on Google “saving” the newspaper business, read this Valley Wag exclusive here. Lots of Google publishing activity, methinks.

Stephen Arnold, February 17, 2009

What Is Vint Cerf Saying?

February 16, 2009

Lidija Davis’s “Vint Cerf: Despite Its Age, the Internet Is Still Filled with Problems” does a good job of providing an overview of Vint Cerf’s view of the Internet. You can read the article here. Ms. Davis provides a snapshot of the issues that must be addressed, if she captured the Google evangelist’s thoughts accurately:

According to Cerf, and many others, inter-cloud communication issues such as formats and protocols, as well as inter or intra-cloud security need to be addressed urgently.

I found the comments about bit rot interesting and highly suggestive. She quite rightly points out that her summary presents only a small segment of the talk.

When I read her pretty good write up, I had one thought: “Google wants to become the Internet.” If the company pulls off this grand slam play, then the issues identified by Evangelist Cerf can be addressed in a more forthright manner. My reading of the Guha patent documents, filed in February 2007, reveals some of the steps Google’s programmable search engine includes to tackle the problems Mr. Cerf identified and Ms. Davis reported. I find the GoogleNet an interesting idea to ponder. With some content pulled from Google caches and the Google CDN (content delivery network), Google may be the appropriate intermediary and enforcer in this increasingly unstable “space”.

Stephen Arnold, February 16, 2009

Another Google Glitch

February 16, 2009

More technical woes befuddle the wizards at Google. According to SERoundTable’s article “Google AdSense and AdWords Reportings Takes a Weekend Break” [sic] here, these systems’ analytic reports did not work. I wonder if Googzilla took a rest on Valentine’s Day? The story provides a link to Google’s “good news” explanation of the problem in AdWords help. SERoundTable.com provides links to the various “discussions” and “conversations” about this issue. This addled goose sees these as “complaints” and “snarls”, but that’s the goose’s refusal to use the lingo of the entitlement generation.

Call it what you will. The GOOG has been showing technical missteps with what the goose sees as increasing frequency. The Google plumbing reached state of the art in the 1998 to 2004 period. Now the question is whether the plumbing and the layers of software piled on top of Chubby and the rest of the gang can handle the challenges of Facebook.com and Twitter.com. Google knows what to do to counter these real time search challengers. The question is, “Will its software system and services allow Googzilla to deal with these threats in an increasingly important search sector?” I am on the fence because of these technical walkabouts in mission critical systems like AdSense and AdWords. Who would have thought that the GOOG couldn’t keep its money machine up and running on Cupid’s day? Is there a lack of technical love in Mountain View due to other interests?

Stephen Arnold, February 16, 2009

Mysteries of Online 6: Revenue Sharing

February 16, 2009

This is a short article. I was finishing the revisions to my monetization chapter in Google: The Digital Gutenberg and ran across notes I made in 1996, the year in which I wrote several articles about online for Online Magazine. One of the articles won the best paper award, so if you are familiar with commercial databases, you can track down this loosely coupled series in the LITA reference file or other Dialog databases.

Terms Used in this Write Up

database: A file of electronic information in a format specified by the online vendor; for example, Dialog Format A or EBCDIC
database producer: An organization that creates a machine-readable file designed to run on a commercial online service
online revenue: Cash paid to a database producer generated when a user connected to an online database and displayed online or output the results of a search to a file or a hard copy
online vendor: A commercial enterprise that operated a time sharing service, search system, and customer support service on a fee basis; that is, annual subscription, online connect charge, online type or print charge
publisher: An organization engaged in creating content by collecting submissions or paying authors to create original articles, reports, tables, and news
revenue: Money paid by an organization or a user to access an online vendor’s system and then connect and access the content in a specific database; for example, Dialog File 15 ABI/INFORM

My “mysteries” series has evoked some comments, mostly uninformed. The number of people who started working in search when IBM STAIRS was the core tool is dwindling. The people who cut their teeth in the granite choked world of commercial online comprise an even smaller group. Commercial online began with US government funding in the early 1960s, so Ruby loving script kiddies are blissfully ignorant of how online files were built and then indexed. No matter. The lessons form foundation stones in today’s online world.

Indexing and Abstracting: A Backwater

Aggregators collect content from many different sources. In the early days of online, this meant peer reviewed articles. Then the net widened to gather magazines and non-peer reviewed publications like trade association magazines. Indexing and abstracting in the mid 1960s was a backwater because few publishers knew much about online. Permission to index and abstract was often not required, and when a publisher wanted to know why an outfit was indexing and abstracting a publication, the answer was easy: “We are creating a library reference book.” Most publishers cooperated, often providing some of the indexing and abstracting outfits with multiple copies of their publications.

Some of the indexing and abstracting was very difficult; for example, legal, engineering, and medical information posed special problems. The vocabulary used in the documents was specialized, and word lists with Use For and See Also references were essential to indexing and abstracting. The abstract might define a term or an acronym when it referenced certain concepts. When abstracts were included with a journal article, the outfit doing the indexing and abstracting would often ask the publisher if it was okay to include that abstract in the bibliographic record. For decades publishers cooperated.

The reason was that publishers and indexing and abstracting outfits were mutually reinforcing operations. The publisher collected money from subscribers, members, and in some cases advertisers. The abstracting and indexing shops earned money by creating print and electronic reference materials. In order to “read the full text”, the researcher had to have access to a hard copy of the source document or, in some cases, a microfilm instance of the document.

No money was exchanged in most cases. I think there was trust among publishers and indexing and abstracting outfits. Some of the people engaged in indexing and abstracting created products so important to certain disciplines that courses were taught in universities worldwide to teach budding scientists and researchers how to “find” and “use” indexes, abstracts, and source documents. Examples include the Chemical Abstracts database, Beilstein, and ABI/INFORM, the database with which I was associated for many years.

Pay to Process Content

By 1982, some publishers were aware that abstracting and indexing outfits were becoming important revenue generators in their own right. Libraries were interested in online, first in catalogs for their patrons, and then in licensing certain content directly from the abstracting and indexing shops. The first reason for this interest from libraries (medical, technical, university, public, etc.) was that the technology to ingest a digital file (originally on tape) was becoming available. Second, the cost of using commercial online services which would make hundreds of individual abstract and index databases available was variable. The library (academic or corporate) would obtain a password and a license. Each database incurred a charge, usually billed either by the minute or per query. Then there were the online connect charges imposed by outfits like Tymnet or other services. And there were even charges for line returns on the original Lexis system. Libraries had limited budgets, so it made sense for some libraries to cut the variable costs by loading databases on a local system.
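To see why local loading appealed to budget minded librarians, here is a minimal sketch of the variable versus fixed cost tradeoff. Every dollar figure and parameter is hypothetical, invented for illustration rather than drawn from any actual vendor contract.

```python
def annual_online_cost(searches, per_query_fee, minutes_per_search,
                       per_minute_fee, network_fee_per_minute):
    """Variable cost of running searches on a commercial online service."""
    connect_minutes = searches * minutes_per_search
    return (searches * per_query_fee
            + connect_minutes * (per_minute_fee + network_fee_per_minute))

def annual_local_cost(license_fee, local_system_cost):
    """Roughly fixed cost of licensing the tape and loading it on a local system."""
    return license_fee + local_system_cost

# Hypothetical figures: 8,000 searches a year at $2.00 per query, five connect
# minutes per search at $1.50 per minute, plus $0.25 per minute for Tymnet
# style network charges, versus a $30,000 license and $15,000 in local costs.
online = annual_online_cost(8_000, 2.00, 5, 1.50, 0.25)
local = annual_local_cost(30_000, 15_000)
print(f"online: ${online:,.0f}   local: ${local:,.0f}")
```

Past a modest search volume, the fixed cost of the local load undercuts the metered charges, which is the arithmetic that drove the licensing interest described above.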

By 1985, full text became more attractive to users. The reason was that A&I (abstracting and indexing) services provided pointers. The user then had to go find and read the source document. The convenience of having the bibliographic information and the full text online was obvious to anyone who performed research in anything other than a casual, indifferent manner. The notion of disintermediation expanded first in the A&I field because with full text, why pay to create a formal bibliographic record and manually assign index terms? The future was full text because systems could provide pointers to documents. Then the document of interest to the researcher could be saved to a file, displayed on screen, or printed for later reference.

The shift from the once innovative A&I business to the full text approach threw a wrench into the traditional reference business. Publishers were suspicious and then fearful that if the full text of their articles were in online systems, subscription revenues would fall. The publishers did not know how much risk these systems posed, but some publishers like Crain’s Chicago Business wanted an upfront payment to permit my organization to create full text versions of certain articles in the Crain publications. The fees were often in the five figure range and had additional contractual obligations attached. Some of these original constraints may still be in operation.


Negotiating an online deal is similar to haggling to buy a sheep in an open market. The authors were often included among the sheep in the traditional marketplace for information. Source: http://upload.wikimedia.org/wikipedia/commons/thumb/0/0e/Haggling_for_sheep.jpg/800px-Haggling_for_sheep.jpg

Revenue Sharing

Online vendors like Dialog Information Services knew that change was in the air. Some vendors like Dialog and LexisNexis moved to disintermediate the A&I companies. Publishers jockeyed to secure premium deals for their full text material. One deal which still resonates at LexisNexis today was the New York Times’s arrangement with LexisNexis for the New York Times’s content. At its height, the rumor was that LexisNexis paid more than $1 million for the exclusive that put the New York Times’s content in the LexisNexis services. The New York Times decided that it could do better by starting its own online system. Because publishers saw only part of the online puzzle, the New York Times’s decision was a fateful one which has hobbled the company to the present day. The New York Times did not understand the cost of the infrastructure and the importance of habituated users who respond to the magnetism of an aggregate service. Pull out a chunk of content, even the New York Times’s content, and what you get is a very expensive service with insufficient traffic to pay the overall cost of the online operation. Publishers making this same mistake include Dow Jones, the Financial Times, and others. The publishers will bristle at my assertion that their online businesses are doomed to be second string players, but look at where the money is today. I rest my case.

To stay in business, online players cooked up the notion of revenue sharing. There were a number of variations of this business model. The deal was rarely 50 – 50 for the simple reason that as contention and distrust grew among the vendors, the database companies, and the publishers, knowledge of costs was very difficult to get. Without an understanding of costs in online, most organizations are doomed to paddling upstream in a creek that runs red ink. The LexisNexis service may never be able to work off the debt that hangs over the company from its money sucking operations that date from the day the New York Times broke off to go on its own. Dow Jones may never be able to pay off the costs of the original Dow Jones online service which ran on the mainframe BRS search system and then the expensive joint venture with Reuters that is now a unit in Dow Jones called Factiva. Ziff Communications made online pay with its private label CompuServe service and its savvy investments in high margin database operations that did business as Information Access. Characteristic of Ziff’s acumen, the Ziff organization exited the online database business in the early 1990s and sold off the magazine properties, leaving the Ziff group with another fortune in the midst of the tragedy of Mr. Ziff’s health problems. Other publishers weren’t so prescient.

With knowledge in short supply, here were the principal models used for revenue sharing:

Tactic A: Pool and Payout Based on Percentage of Content from Individual Publishers

This was a simple way to compensate publishers. The aggregator would collect revenues. The aggregator would scrape off an amount to cover various costs. The remainder would then be divided among the content providers based on the amount of content each provider contributed. To keep the model simple (it wasn’t), think of a gross online revenue of $110. Take off $10 for overhead (the actual figure was variable and much larger). The remainder is $100. One publisher provided 60 percent of the content in the pay period. Another publisher provided 40 percent of the content in the pay period. One publisher got a check for $60 and the other a check for $40. The pool approach guarantees that most publishers get some money. It also makes it difficult to explain to a publisher how a particular dollar amount was calculated. Publishers who turned an MBA loose on these deals would usually feel that their “valuable” content was getting shortchanged. It wasn’t. The fact is that disconnected articles are worth less in a large online file than a collection of articles in a branded traditional magazine. But most publishers and authors today don’t understand this simple fact of the value of an individual item within a very large collection.
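For readers who like to see the arithmetic spelled out, here is a minimal sketch of the pool and payout split using the $110 example above; the publisher names are placeholders, not actual parties.

```python
def pool_and_payout(gross_revenue, overhead, content_shares):
    """Split the revenue pool among publishers by share of contributed content.

    content_shares maps a publisher name to its fraction of the content
    supplied during the pay period; the fractions must sum to 1.0.
    """
    if abs(sum(content_shares.values()) - 1.0) > 1e-9:
        raise ValueError("content shares must sum to 1.0")
    pool = gross_revenue - overhead  # the aggregator scrapes off its costs first
    return {publisher: round(pool * share, 2)
            for publisher, share in content_shares.items()}

# The example from the text: $110 gross, $10 overhead, a 60/40 content split.
checks = pool_and_payout(110.0, 10.0, {"Publisher A": 0.60, "Publisher B": 0.40})
print(checks)  # {'Publisher A': 60.0, 'Publisher B': 40.0}
```

The opacity complained about above comes from the overhead figure and the share measurement, not from the split itself, which is a one-line calculation.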

I was fascinated when smart publishers would pull out of online services and then try to create their own stand alone online services without understanding the economic forces of online. These forces operate today and few understand them after more than 40 years of use cases.


Truescoop: Social Search with a Twist

February 16, 2009

As social media keeps expanding, privacy and security are in the spotlight, especially on sites like MySpace and Facebook, where you can list home address, birthdays, phone numbers, e-mails, and family connections, and post pictures and life details. That information is then available to anyone. Every time you access an application on Facebook, you have to click through a warning screen that tells you that the app will be gathering your personal information. And now there’s Truescoop, http://www.truescoop.com, a Facebook tool at http://apps.facebook.com/truescoop/ specifically designed to target that personal information. TrueScoop’s database of millions of records and photos is meant to help people discover personal and criminal histories. So if you find a date on Facebook, you can check them out first, right? Whoa. We all know that there are issues with the Internet and personal privacy, but how much is going too far? Although Truescoop says its service is confidential, the info isn’t: TrueScoop also allows users to share discoveries with others and comment on someone’s personal profile. Time to be more cautious. Consider carefully what information you post on your blogs and sites. You don’t want some other goose to steal your golden egg.

Jessica W. Bratcher, February 16, 2009

Google and Torrents: Flashing Yellow Lights

February 15, 2009

Ernesto, writing in Torrent Freak here, may have spotted the first flashing yellow signal for Google’s custom search service. You can learn about the CSE here. The article that caught my attention as I was recycling some old information for part six of my mysteries of online opinion series was “uTorrent Adds Google Powered Torrent Search.” If you don’t know what a torrent is, ask your child or your local hacker. uTorrent is now using “a Google powered torrent search engine”. Ernesto said:

While the added search is not a particular good way to find torrents, its addition to the site is an interesting move by BitTorrent Inc. Not so long ago, uTorrent removed the search boxes to sites like Mininova and isoHunt from their client, as per requests from copyright holders. However, since BitTorrent Inc. closed its video store, there is now no need to please Hollywood and they are free to link to torrent sites again.

With more attention clumping to pirated software and digital content, Ernesto’s article might become the beacon that attracts legal eagles, regulators, and folks looking to get something for nothing. I will keep my eye open for other Google assertions. Until I get more information, I want to remind you that I am flagging an article by another person. I am not verifying Ernesto’s point. The story could be the flashing signal or a dead bulb. I find it interesting either way. Google’s index has many uses; for example, looking for the terms hack, password, confidential, etc.

Stephen Arnold, February 15, 2009

SEO: Costly, Noisy, and Uninteresting

February 15, 2009

I enjoy reading the comments from aggrieved search engine optimization wizards. I know SEO is a big business. I met a fellow in San Jose who boasted of charging “clueless clients” more than $5,000 a month for adding a page label and metatags to a Web page. Great work if you want to do it. I don’t. I gave talks for a couple of years at various SEO conferences. I grew tired of 20 somethings coming up to me and asking, “How do I get a high Google ranking?” The answer is simple, “Follow Google’s guidelines and write information that is substantive.” Google makes the rules of its information toll road pretty clear. Look here, for example. Google even employs some mostly socially acceptable engineers to baby sit the SEO mavens.

I am not alone in taking a dim view of SEO. I have spoken with several of the Beyond Search goslings about methods to fool Mother Google. These range from ripping off content from other sources in violation of copyright to loading up pages with crapola that Google’s indexing system interprets as “content.” Here’s an article that I keep in my ready file when I get asked about SEO. I love the title: “Make $200K+ a Year Running the SEO Scam.” I also point to an SEO “expert’s” own tips to help avoid the most egregious scam artists. You can read this checklist from AnyWired.com here. Finally, navigate here and look at the message in words and pictures. The message is pretty clear. Pay for rankings whether the content is information, disinformation, good, bad, or indifferent.

My suggestion is to take a writing class and then audit a course in indexing at an accredited university offering a degree in library science. Oh, too much work? Too bad for me because I have to wade through false drops in public Web search engines. SEO is contributing to information problems, not solving them.

In Washington, DC, a few days ago, I heard this comment, “We have to get our agency to appear higher in the Google rankings. The House finance committee uses Google results to determine who is doing a good job.” Great. Now the Federal Web sites, which are often choked with data, will be doing SEO to reach elected officials. Wonderful.

SEO is like kudzu. I’m glad I confine my SEO activities to recommending that sites use clean code, follow Google’s rules, include content that is substantive, and update information frequently. I leave the rest to the trophy generation carpetbaggers.

Stephen Arnold, February 15, 2009
