The Fragmentation of Content Analytics

October 29, 2012

I am in the midst of finalizing a series of Search Wizards Speak interviews with founders or chief technology officers of some interesting analytics vendors. Add to this work the briefings I have attended in the last two weeks. Toss in a conference which presented a fruit bowl of advanced technologies which read, understand, parse, count, track, analyze, and predict who will do what next.

Wow.

From a distance, the analytics vendors look the same. Up close, each is a distinct shard. Pick up the wrong one and a cut finger, or worse, may result.

A happy quack to www.thegreenlivingexpert.com

Who would have thought that virtually every company engaged in indexing would morph into a next-generation, Euler-crazed, Gauss-loving number cruncher? If the names Euler and Gauss do not resonate with you, you are in for tough sledding in 2013. Math speak is the name of the game.

There are three very good reasons for repackaging Vivisimo as a big data and analytics player. I choose Vivisimo because I have used it as an example of IBM's public relations mastery. The company developed a deduplication feature which was and is, I assume, pretty darned good. Then Vivisimo became a federated search system, nosing into territory staked out by Deep Web Technologies. Finally, when IBM bought Vivisimo for about $20 million, the reason was big data and similarly bright, sparkling marketing lingo. I wanted to mention Hewlett Packard's recent touting of Autonomy as an analytics vendor or Oracle's push to make Endeca a business analytics giant. But IBM gets the nod. Heck, it is a $100 billion a year outfit. It can define an acquisition any way it wishes. I am okay with that.

Read more

The Google Search Appliance Adds Bells and Whistles

October 18, 2012

A version of this article appears on the www.citizentekk.com Web site.

The Google Search Appliance is getting along in years. A couple of weeks ago (October 2012), Google announced that Version 7.0 of the Google Search Appliance GB-7007 and the GB-9009 was available. The features of the new system are long overdue in my opinion. Among the new features are two highly desirable enhancements: better security controls and faceted browsing. But the killer feature, in my opinion, is support for the Google Translate application programming interface.

Microsoft will have to differentiate the now aging SharePoint Search 2013 from a Google Search Appliance. Why? GSA Version 7 can be plugged into a SharePoint environment, and the system will, without much muss or fuss, index the SharePoint content. Plug and play is not what SharePoint Search 2013 delivers. The fast deployment of a GSA remains one of its killer features. Simplicity and ease of use are important. When one adds Google magic, GSA Version 7 can be another thrust at Microsoft's enterprise business.

See http://www.bluepoint.net.au/google-search/gsa-product-model

Google has examined competitive search solutions and, in my opinion, made some good decisions. For example, a user may add a comment to a record displayed in a results list. The idea of allowing enterprise users to add value to a record was a popular feature of Vivisimo Velocity. But since IBM acquired Vivisimo, that company has trotted down the big data trail.
Endeca has for more than 12 years offered licensees of its systems point-and-click navigation. An Endeca search solution can slash the time it takes for a user to pinpoint content related to a query. Google has made the GSA more Endeca-like while retaining the simplified deployment which characterizes an appliance solution.

As I mentioned in the introduction, one of the most compelling features of the Version 7 GSAs is direct support for Google Translate. Organizations increasingly deal with mixed language documents. Product and market research will benefit from Google’s deep support of languages. At last count, Google Translate supported more than 60 languages, excluding Latin and Pig Latin. Now Google is accelerating its language support due to its scale and data sets. Coupled with Google’s smart software, the language feature may be tough for other vendors to match.
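
For developers working outside the appliance, the same capability is reachable through the Translate API v2 REST endpoint. What follows is a minimal, unofficial Python sketch of such a call; the API key and the sample string are placeholders, and the GSA's own integration hides all of this plumbing from licensees.

```python
# Minimal sketch of a direct call to the Google Translate API v2 REST endpoint.
# The API key below is a placeholder; the GSA handles this integration internally.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder

def translate(text, target="en", source=None):
    """Return text translated into the target language via the Translate v2 endpoint."""
    params = {"key": API_KEY, "q": text, "target": target}
    if source:
        params["source"] = source
    url = "https://www.googleapis.com/language/translate/v2?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    return payload["data"]["translations"][0]["translatedText"]

if __name__ == "__main__":
    # Example: translate a French phrase into English.
    print(translate("moteur de recherche d'entreprise", target="en", source="fr"))
```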

Enterprise searchers want to be able to examine a document quickly. To meet this need, Google has implemented in-line document preview. A user can click on a hit and see a rendering of the document without having to launch the native application. A PDF in a results list appears without the wait of the seconds it takes for Adobe Reader or FoxIt to fetch and display the document.

What's not to like? The GSA GB-7007 and GB-9009 deliver most of the most-wanted features to make content searchable regardless of source. If a proprietary file type must be indexed, Google provides developers with enough information to get the content into a form which the GSA can process. Failing that, Google partners and third-party vendors can deliver specialized connectors quickly.

Read more

LexisNexis Embraces Open Source

October 2, 2012

LexisNexis has been anchored in commercial, for-fee search and retrieval for decades. I fondly recall watching the red Lexis terminals double-space each line. I saw the white space as a way to charge for data and special paper. The search system was proprietary. When the company talked about search, most of the information was about how to extract information from the LexisNexis system, not about the details of the systems and subsystems.

I found "How LexisNexis Competes In Hadoop Age" quite fascinating because the story included some high-level information along with a bit more detail about LexisNexis' approach to data management for its risk businesses. The idea is that LexisNexis has quite a bit of data from different sources. These data become the raw material for analyses which allow users of the for-fee products and services to assess risk.

The big news in the story is that LexisNexis has developed a high-performance computing cluster. The foundation is HPCC. Here is the key phrase: "an open source platform." Critics of open source who question its security and stability did not dissuade LexisNexis from the open source path.


HPCC charges an annual subscription for the platform software, which includes enterprise support.

The other high-level items in the story included:

  • The decision to shift to open source was taken over a span of several years
  • The market for a big data platform such as LexisNexis is new. This means that LexisNexis and its corporate parent Reed Elsevier are pushing into uncharted territory.
  • The Hadoop platform is viewed as becoming fragmented. The strategy seems to be to offer a unified alternative.

The data management system is HPCC. Some of the details about this platform are:

  • The data transformation methods are implemented via an open source Enterprise Control Language
  • The system includes a social graph (relationship analysis) component
  • HPCC will be easier to use.
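
ECL is HPCC's own language, so no ECL appears here. Purely to illustrate the kind of relationship (social graph) analysis the list above refers to, here is a toy Python sketch that links entities which share an attribute such as an address or employer. The records are invented, and nothing below is HPCC code.

```python
# Toy illustration of relationship (social graph) analysis: entities become
# linked when they share an attribute. Generic Python, not HPCC's ECL.
from collections import defaultdict
from itertools import combinations

# Invented records pairing an entity with an attribute (phone, address, employer).
records = [
    ("J. Smith", "502-555-0101"),
    ("J. Smith", "123 Main St"),
    ("A. Jones", "123 Main St"),
    ("A. Jones", "Acme Corp"),
    ("B. Brown", "Acme Corp"),
]

# Invert the records: attribute -> entities that share it.
by_attribute = defaultdict(set)
for entity, attribute in records:
    by_attribute[attribute].add(entity)

# Two entities are related if at least one attribute is shared.
edges = defaultdict(set)
for entities in by_attribute.values():
    for a, b in combinations(sorted(entities), 2):
        edges[a].add(b)
        edges[b].add(a)

for entity in sorted(edges):
    print(entity, "->", sorted(edges[entity]))
# A. Jones -> ['B. Brown', 'J. Smith']
# B. Brown -> ['A. Jones']
# J. Smith -> ['A. Jones']
```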

The community edition is available via download; there is a fee for the enterprise edition of the system. Modules which extend the basic system are also available. Information about how to buy a subscription is on the HPCC Web site, but no pricing information was posted there as of October 1, 2012. You may want to call HPCC or check back to see if cost information appears.

In my opinion, the shift to open source makes sense due to the cost efficiencies the approach can deliver. Open source technology can be excellent for operating systems and search. For data management, proprietary vendors assert that systems like Oracle's database and IBM DB2 offer advantages which many organizations find attractive.

Will LexisNexis embrace open source technology for its commercial search and retrieval service? Perhaps LexisNexis, like Microsoft, is moving more quickly in the open source sector than I know. IBM has been a leader in tapping open source technology within its commercial business model.

As I write this, Reed Elsevier, according to Google Finance, has been able to grow its top line revenue modestly since early 2010 while trimming total operating expenses by a percentage point or two. My view is that open source software offers one way to trim certain licensing fees and possibly some of the restrictions that proprietary software vendors impose on their customers.

Stephen E Arnold, October 2, 2012

Sponsored by Augmentext

The Blame Game: Civility and the Lousy Economy

September 24, 2012

I read "Frustrated with Poor Mobile Sales, Publishers Blame Ad Agencies." The main idea is that mobile devices have become the go-to source of information for many people. The article puts the content consumption shift in perspective:

Mobile makes up a fifth of reader traffic for 87 percent of publishers, but only 29 percent of them are seeing the same proportion of revenue come from mobile, according to respondents to a census issued by the UK’s Association of Online Publishers (AOP).

The article includes charts and graphs which point out some of the reasons for the revenue challenge. For example, device fragmentation and expertise are two important factors.

Let’s step back.

There are a number of shifts going on simultaneously. Analysts have to pick one and then try to quantify it or put the shift into perspective. But no single factor is going to fix a revenue shortfall.

Publishers find themselves caught, like personal computer makers, in a situation which is different from the business environment of five years ago. Change in an organization comes slowly. Humans want to learn how to perform a task or tasks smoothly and efficiently. In the last five years, the strain on organizations which want systems that take a licking and keep on ticking has only increased.

How many people today want to read or can afford a book like this one?

In publishing, the traditional role of publishers as the makers of information has been altered. The publishers have been experimenting, innovating, and developing new products. At the same time, others have been changing, often more quickly and without the friction of cannibalizing revenues from the Old Faithfuls in the product lineup. Maybe high-value information is for the elite? Money for online content may go to those with the lowest common denominator for value?

Read more

No Wonder Search Is Broken. Software Does Not Work.

September 17, 2012

Several years ago, I ran across a Microsoft-centric podcast hosted by an affable American, Scott Hanselman. At the time he worked for a company developing software for the enterprise. Then I think he started working at Microsoft, and I lost track of him.

I read “Everything’s Broken and Nobody’s Upset.” The author was Scott Hanselman, who is “a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee.”

The article is a list of bullet points. Each bullet point identifies a range of software problems. Some of these were familiar; for example, iPhoto choking on large numbers of pictures on my wife's new Mac laptop. Others were unknown to me; for example, the lousy performance of Gmail. Hopefully Eric Brewer, a co-founder of Inktomi, can help improve the performance of some Google services.


Answer to the Google query “Why are Americans…”

 

The problems Mr. Hanselman identifies can be fixed. He writes:

Here we are in 2012 in a world of open standards on an open network, with angle brackets and curly braces flying at gigabit speeds and it’s all a mess. Everyone sucks, equally and completely.

  • Is this a speed problem? Are we feeling we have to develop too fast and loose?
  • Is it a quality issue? Have we forgotten the art and science of Software QA?
  • Is it a people problem? Are folks just not passionate about their software enough to fix it?

I think it’s all of the above. We need to care and we need the collective will to fix it.

My reaction was surprise. I know search, content processing, and Fancy Dan analytics do not work as advertised, as expected, or, in some cases, very well despite the best efforts of rocket scientists.

The idea that the broad world of software is broken was an interesting idea. Last week, I struggled with a client who could not explain what its new technology actually delivered to a user. The reason was that the words the person was using did not match what the new software widget actually did. Maybe the rush to come up with clever marketing catchphrases is more important than solving a problem for a user?

In the three disciplines we monitor—search, content processing, and analytics—I do not have a broad method for remediating "broken" software. My team and I have found that the approach outlined by Martin White and me in Successful Enterprise Search Management is simply ignored by those implementing search. I can't speak for Martin, but my experience is that the people who want to implement a search, content processing, or analytics system demonstrate these characteristics. These items are not universally shared, but I have gathered the most frequent actions and statements over the last year for the list. The reasons for lousy search-related systems:

  • Short cuts only, please. Another consultant explained that buying third-party components was cheaper, quicker, and easier than looking at the existing search-related system
  • Something for nothing. The idea is that a free system is going to save the day.
  • New is better. The perception that a new system from a different vendor would solve the findability problem because it was different
  • We are too busy. The belief that talking to the users of a system was a waste of time. The typical statement about this can be summarized, “Users don’t know what they want or need.”
  • No appetite for grunt work. This is an entitlement problem: figuring out metrics like content volume, processing issues for content normalization, and reviewing candidate term lists is seen as not their job or as too hard.
  • No knowledge. This is a weird problem caused in part by point-and-click interfaces or predictive systems like Google's. Those who should know about search-related issues do not. Therefore, education is needed. But, like recalcitrant 6th graders, those involved do not put in the effort required to learn.
  • Looking for greener pastures. Many of those working on search-related projects are looking to jump to a different and higher paying job in the organization or leave the company to do a startup. As a result, search-related projects are irrelevant.

The problem in search, therefore, is not the technology. Most of the systems are essentially the same as those which have been available for decades. Yes, decades. Precision and recall remain in the 80 percent range. Predictive systems chop down data sets to more usable chunks but prediction is a hit and miss game. Automated indexing requires a human to keep the system on track.
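
For readers who want the arithmetic behind those precision and recall figures, the standard set-based calculation fits in a few lines of Python. The document identifiers below are invented.

```python
# Set-based precision and recall for a single query.
def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Invented document identifiers for illustration.
retrieved = ["d1", "d2", "d3", "d4", "d5"]   # what the engine returned
relevant = ["d1", "d3", "d5", "d7"]          # what a human judged relevant
p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f} recall={r:.2f}")   # precision=0.60 recall=0.75
```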

The problem is anchored in humans: their knowledge, their ability to prioritize search-related tasks, their willingness to learn. Net net: software is not getting much better, but it is prettier than a blinking dot on a VAX terminal. Better? Nah. Upset? Nope, there are distractions and Facebook pals to provide assurances that everything is A-OK.

Stephen E Arnold, September 17, 2012

Sponsored by Augmentext

More on Marketing Confusion in Big Data Analytics

September 11, 2012

Search vendors are like squirrels dodging traffic. Some make it across the road safely. Others? Well, there is a squirrel heaven, I assume. Which search vendors will survive the speeding tractor trailers carrying big data, analytics, and visualization to customers who are famished for systems which make sense of information? I don't know. No one really knows.

Do squirrels understand high speed, high volume traffic? A happy quack to http://surykatki.blox.pl/html/1310721,262146,14,15.html?7,2007 for a fierce squirrel image.

What is fascinating is to watch the Darwinian process at work among vendors of search and content processing. TextRadar’s “Content Intelligence: An Unexpected Collision Is Coming” makes clear that there are quite a few companies not widely known in the financial and health care markets. Some of these companies have opportunities to make the leap from government contract work to commercial work for Fortune 1000 companies.

But what about more traditional search vendors?

I received in the snail mail a copy of Oracle Magazine, September-October 2012. The article which caught my attention was "New Questions, Fast Answers." The information was in the form of an interview between Rich Schwerin, an Oracle Magazine writer, and Paul Sonderegger, senior director of analytics at Oracle. Mr. Sonderegger was the chief strategist at Endeca, which is now part of the Oracle family of companies.

I have followed Endeca since I first learned about the company in 1999, 13 years ago. Like many traditional search vendors, the underlying technical concepts of Endeca date from the salad days of key word search. Endeca's innovation was to use concepts, either human-assigned or generated by software, to group related information. The idea was that a user could run a query and then click on concepts to "discover" information not in the explicit key word match. Endeca dubbed the function "guided navigation" and applied the approach to eCommerce as well as search across the type of information found in a company. The core of the technology was the "Endeca MDEX" engine. At the time of Endeca's market entrance, there were only a handful of companies competing for enterprise search and eCommerce. In the last two decades the field has narrowed in one sense, with the big-name companies acquired by larger firms, and broadened in another. There are hundreds of vendors offering search, but the majority of these companies use different words to describe indexing and search.
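
To make "guided navigation" concrete without peeking inside the MDEX engine, here is a toy Python sketch of the general pattern: count the concept tags attached to keyword hits, then let the user narrow the result set by clicking one. The documents and tags are invented; this illustrates the idea, not Endeca's implementation.

```python
# Toy guided (faceted) navigation: tally concept tags over keyword hits and
# filter when the user picks a facet. Not Endeca's MDEX engine.
from collections import Counter

# Invented documents with human- or machine-assigned concept tags.
docs = [
    {"id": 1, "text": "quarterly sales report",  "concepts": {"finance", "reports"}},
    {"id": 2, "text": "sales pipeline forecast", "concepts": {"finance", "forecasting"}},
    {"id": 3, "text": "sales team onboarding",   "concepts": {"hr"}},
]

def search(query, facet=None):
    hits = [d for d in docs if query in d["text"]]
    if facet:
        hits = [d for d in hits if facet in d["concepts"]]
    facet_counts = Counter(c for d in hits for c in d["concepts"])
    return hits, facet_counts

hits, facets = search("sales")
print([d["id"] for d in hits], facets)   # three hits plus concept counts to display
hits, _ = search("sales", facet="finance")
print([d["id"] for d in hits])           # user clicks "finance": narrowed to [1, 2]
```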

One Endeca executive (Peter Bell) told me in 2005 that the company had been growing at 100 percent each year since 2002. At the time of the Oracle buy out, I estimated that Endeca had hit about $150 million in revenues. Oracle paid about $1.1 billion for the company, or what, if I am accurate, amounts to roughly seven times annual revenues. Endeca was a relative bargain compared to Hewlett Packard's purchase of Autonomy for $10 billion. Autonomy, founded a few years before Endeca, had reached about $850 million in annual revenues, so the multiple on revenues was greater than the Endeca deal. The point is that both of these search giants ranked one and two in enterprise search revenues. Both companies emphasized their technologies' ability to handle structured and unstructured information. Both Autonomy and Endeca offered business intelligence solutions. In short, both companies had capabilities which some of the newcomers mentioned in the Text Radar article are now touting as fresh and innovative. One key point: It took Endeca roughly a dozen years to hit $150 million, and now Oracle has to generate more revenue from the aging Endeca technology. HP has the same challenge with Autonomy, of course. Revenue generation, in my opinion, has been time consuming and difficult. Of the hundreds of vendors past and present, only two have broken the $150 million in revenue barrier. Google and Microsoft would be quick to point out that their search systems are far larger, but these are special cases because it is difficult to unwrap search revenues from other revenue streams.

What does Mr. Sonderegger say in this Oracle Magazine interview? Let me highlight three points and urge you to read the full text of his remarks.

Easy Access

First, business users do not know how to write queries, so “guided navigation” services are needed. Mr. Sonderegger noted:

There has to be some easy way to explore, some way to search and navigate as easily as you do on an e-commerce site.

Most of the current vendors of analytics and findability systems seem to have made the leap from point-and-click to snazzy visualizations. The Endeca angle is that users want to discover and navigate. The companies referenced in the Text Radar story want to make the experience visual, almost video-game like.

Read more

Google Autocomplete: Is Smart Help a Hindrance?

September 10, 2012

You may have heard of the deep extraction company Attensity. There is another company in a similar business with the name inTTENSITY. Note the playful misspelling of the common word "intensity." What does a person looking for the company inTTENSITY get when he or she runs a query on Google? Look at what Google's autocomplete suggestions recommend when I type intten:

Google autocomplete suggestions for the prefix "intten"

The company’s spelling appears along with the less helpful “interstate ten”, “internet explorer ten”, and “internet icon top ten.” If I enter “inten”, I don’t get the company name. No surprise.

Google autocomplete suggestions for the prefix "inten"

Is Google's autocomplete a help or a hindrance? The answer, in my opinion, is that it depends on the user and what he or she is seeking.

I just read “Germany’s Former First Lady Sues Google For Defamation Over Autocomplete Suggestions.” According to the write up:

When you search for “Bettina Wulff” on Google, the search engine will happily autocomplete this search with terms like “escort” and “prostitute.” That’s obviously not something you would like to be associated with your name, so the wife of former German president Christian Wulff has now, according to Germany’s Süddeutschen Zeitung, decided to sue Google for defamation. The reason why these terms appear in Google’s autocomplete is that there have been persistent rumors that Wulff worked for an escort service before she met her husband. Wulff categorically denies that this is true.

The article explains that autocomplete has been the target of criticism before. The concluding statement struck me as interesting:

In Japan, a man recently filed a suit against Google after the autocomplete feature started linking his names with a number of crimes he says he wasn’t involved in. A court in Japan then ordered Google to delete these terms from autocomplete. Google also lost a similar suit in Italy in 2011.

I have commented about the interesting situations predictive algorithms can create. I assume that Google’s numerical recipes chug along like a digital and intent-free robot.
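
The mechanics behind such suggestions are not mysterious. A toy Python sketch of prefix-based autocomplete over an invented query log shows how raw frequency decides what surfaces, with no editorial intent anywhere in the loop.

```python
# Toy prefix-based autocomplete: suggest the most frequent logged queries
# that share the typed prefix. The query log is invented.
from collections import Counter

query_log = Counter({
    "interstate ten": 900,
    "internet explorer ten": 750,
    "internet icon top ten": 300,
    "inttensity": 40,
})

def autocomplete(prefix, k=3):
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda item: -item[1])[:k]]

print(autocomplete("int"))     # frequency wins; the rare brand name never appears
print(autocomplete("intten"))  # only a longer prefix surfaces "inttensity"
```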

Read more

Findability and French Search Vendor Names

September 7, 2012

The findability crisis is escalating. There are three reasons. First, companies have ignored the importance of ensuring their company name is findable in the main search indexes. Second, companies have tried to be cute, using oddball spellings or phrases which are ambiguous. Third, marketers assume that a Web site and some sales people will keep the firm's identity strong.

I am working on a project which requires me to examine the positioning of several French-owned search, analytics, and content processing companies. I don't want to isolate any single company, but I want to do a quick run-through of what I found when I sat down with a team of researchers who were gathering open source information for me.

Tricky spelling may not help. A happy quack to TheChive.com and http://www.buzzom.com/2010/08/top-5-hilarious-brand-knock-offs-by-china-in-india/

Ami Albert. This is the name I have in my files. The company is now AMI with the tagline "enterprise intelligence software." A Google search for "ami" returned zero links to the company. The older name "Ami Albert" returned a hit to a person named Ami Albert and, about six hits down, a link to "Ami Software" at the non-intuitive URL www.amisw.com. The phrase "enterprise intelligence software" returned no hits on the first page of Google results for this "enterprise intelligence software" vendor. Adding "ami" to "enterprise intelligence software" did return links to the company. Conclusion: to find this company in a Web search index, one has to know the older name (Ami Albert) or use the phrase "ami enterprise intelligence software" as a string. Guessing the URL is tough because www.amisw.com is short but opaque. No big deal for my researchers. One wonders how many more customers the company could attract if it were more findable.

Antidot. The company uses a variant of “antidote.” If one knows how to spell the name of the company, Google delivers direct links to the company’s home page at www.antidot.net. However, when I ran the query for “antidot” in Yandex, I got the French company and a link to Antidot in Switzerland and www.antidotincl.ch.

Exalead. This is a unique company name. The product is called CloudView. A search for Exalead will get the researcher to Exalead's main site or its search system. However, CloudView does not return a product link to Exalead or its parent Dassault Systèmes. The linkage between the identity of the company and the brand is tenuous. Product name blurring may not be a good thing for some at 3ds.com.

Polyspot. This French vendor dominates the first four hits on Google and maintains this findability across the public Web indexes. The company is also the first four hits on the Chinese government’s Chinese language search engine Jike.com. Another company uses the same name “polyspot.” The search vendor pushes down the plastic dot company, so semantic conflict is resolved in favor of Polyspot. Obviously Polyspot is doing something that the other French vendors are not in terms of brand.

Sinequa. The name comes from the Latin phrase sine qua non, something that is indispensable. There is a drug called "Sinequan" which Google displays as an autocomplete option. The French vendor comes up as the top hit in Google. The company uses the phrase "unified information access," which is also used by Attivio. In this naming example, two vendors roughly in the same business sector are using the identical phrase to describe their business. What is interesting is that Attivio, not Sinequa, has the stronger semantic grip on this phrase.
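
Running this sort of findability check by hand gets tedious. Here is one way to script it in Python; the top_results helper is hypothetical and must be wired to whatever search API one is licensed to use, since scraping Google's result pages violates its terms of service.

```python
# Sketch of a brand findability check. top_results is a hypothetical helper
# that should return the top result URLs for a query from a search API you license.
from urllib.parse import urlparse

def top_results(query, limit=10):
    """Placeholder: wire this to a search API of your choice."""
    raise NotImplementedError

def findability(company, query, expected_domain, limit=10):
    urls = top_results(query, limit)
    rank = next((i + 1 for i, u in enumerate(urls)
                 if urlparse(u).netloc.endswith(expected_domain)), None)
    return {"company": company, "query": query, "rank": rank}

checks = [
    ("AMI",     "enterprise intelligence software", "amisw.com"),
    ("Antidot", "antidot",                          "antidot.net"),
    ("Exalead", "cloudview",                        "exalead.com"),
    ("Sinequa", "unified information access",       "sinequa.com"),
]

# Uncomment once top_results is implemented:
# for company, query, domain in checks:
#     print(findability(company, query, domain))
```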

Read more

SharePoint Feels the Heat

September 4, 2012

I know there are quite a few companies who depend upon, integrate with, and otherwise cheerlead for Microsoft SharePoint. Heck, there are consultants a-plenty who tout their “expertise” with SharePoint. The problem is that some folks are not taking advantage of SharePoint’s glories. There are also some, if the data in “Most Popular Content Management Systems by Country” are accurate, who may never embrace SharePoint’s wonderfulness.

The write-up appeared on W3Techs and makes clear that the top dog in content management is WordPress, followed by Joomla. Both of these are open source systems. The article asserts:

WordPress, as the most popular CMS overall, also dominates this picture. It is the number one system in most countries in North and South America, Europe and Oceania, many countries in Asia including Russia and India, and surprisingly few countries in Africa. Joomla dominates a fair number of countries in Africa, for example Algeria, Morocco and Nigeria, several countries in Central and South America, such as Venezuela, Ecuador and Cuba, two countries in Europe, Greece and Bosnia, as well as Afghanistan and a number of other countries in Asia.


Are SharePoint centric vendors ignoring the market shifts in content management and search?

So where is SharePoint popular? Where do companies like BA Insight, Concept Searching, dtSearch, Recommind, SurfRay, and dozens upon dozens of other SharePoint supporters need to focus their sales efforts? According to W3Techs:

SharePoint is the number one system in Saudi Arabia, Egypt, Qatar and Lebanon as well as on .mil sites, which again don’t show up as separate country in our chart.

And China? Bad news. W3Techs says:

Discuz is a Chinese system that dominates its home market with 49% market share, but is not so much used outside China.

Thank goodness for Skype and Webex. A sales call and conference visit in these countries can whack an expense budget.

Many stakeholders in search and content processing companies believe that SharePoint as a market will keep on growing and growing. That may indeed happen. However, SharePoint centric vendors are likely to find themselves forced to pivot. At this time, a couple of search and content processing vendors have begun the process. Many have not, and I think that as the cost of sales and marketing rises, investors will want to learn about “life after SharePoint.”

How quickly will this message disseminate? Paddling around in Harrod’s Creek, I think that some companies will continue to ride the SharePoint bandwagon. That’s okay, but the “sudden pivot” which Vivisimo is trying to pull off with its “big data” positioning can leave some people confused.

SharePoint has been a money machine for third parties and consultants for a long time. The history of SharePoint is rarely discussed. The focus is on making the system work. That approach was a money maker when there was strong cash flow and liberal credit. As organizations look for ways to cut costs, open source content management systems seem to be taking hold. We are now tracking these important market shifts in our new service Text Radar.

If the W3Techs data are incorrect, the SharePoint vendors with their assertions about smart algorithms and seamless integration will blast past Autonomy's record search revenues of almost $1 billion. But most search vendors are not Autonomy and are likely to be mired in the $3 to $15 million range where the vast majority of search and content processing vendors dwell.

Could the future be open source and high-value, for-fee add-ons that deliver a solid punch for licensees? We have analyzed the open source search and content processing sector for IDC, and open source as an alternative to SharePoint content management, content processing, and search may have some wind in its sails. How many SharePoint-centric vendors will generate a hundred million in revenue this year? Perhaps zero?

Stephen E Arnold, September 4, 2012

Sponsored by Augmentext

Google Updates the Portal from 1996: Info on Multiple Devices

August 30, 2012

The portal never really died. AOL and Yahoo have kept the 1990s "next big thing" alive despite the financial penalties the approach has imposed on stakeholders. There are other portals which are newer versions of the device which slices, dices, and chops. Examples I have looked at include:

  • NewsIsFree, which delivers headlines, alerts, and allows me to find “sources”
  • WebMD, which is a consumer “treat thyself” and aggregation information portal
  • AutoTrader, which provides a vehicle research, loan, and purchasing portal.

Google, when it rolled out 13 years ago, took advantage of search systems' desire to go "beyond search." The reasons were easy to identify. Over the years, I have enumerated them many times. Google's surge was due to then-search giants looking for a way to generate enough revenue to pay for the cost of indexing content. Now there are some crazy ideas floating around "real" consultant cubicles that search is cheap and easy.

Next Generation Portals: Gate to Revenue Hell?

Fear not, lads and lasses.

Search is brutally expensive and, guess what, the costs keep on rising. The principal reasons are that systems need constant mothering and money. And let's not forget the costs which MBAs often trivialize. These include people who can make the system work, run faster, remain online, and keep pace with technology. Telecommunications, power, hardware, and a number of other "trivial" items gobble money like a 12-year-old soccer player chowing down on junk food after practice.

Next Generation Portals: Gate to Revenue Heaven?

Portals promised to be “sticky”, work like magnets and pull more users, and provide a platform for advertising. Portals were supposed to make money when search generated modest amounts of money. First Overture, then Yahoo, and finally Google realized that the pursuit of objectivity was detrimental to selling traffic. Thus, online pay-to-play programs took off. The portals with a lead like Yahoo fumbled the ball. The more clever Googlers grabbed the melon and kept going back to the farmer’s garden for more. Google had, it appeared, figured out how to remain a search system and make lots of money.

No more.

Do we now witness the portalization of Google? Is the new twist that the Google portal will require the user to have multiple devices? Will each device connect to Google to show more portal advertising goodness?

There is a popular impression among some MBAs on Wall Street and "real" consultants that Google is riding the same old money rocket it did in 2004 to 2006. My view is different.

Read more
