Is New Math Really New Yet?

July 21, 2014

I read “Scientific Data Has Become So Complex, We Have to Invent New Math to Deal With It” in Wired. My hunch is that this article will become Google spider food with a protein punch.

In my lectures for the police and intelligence community, I review research findings from journals and my own work that reveal a little appreciated factoid; to wit: The majority of today’s content processing systems use a fairly narrow suite of numerical recipes that have been embraced for decades by vendors, scientists, mathematicians, and entrepreneurs. Due to the computational constraints of even the slickest of today’s modern computers, processing certain data sets is very difficult and expensive in human effort, programming, and machine time.

Thus, the similarity among systems comes from several factors.

  1. The familiar is preferred to the onerous task of finding a slick new way to compute k-means or perform one of the other go-to functions in information processing (see the sketch after this list)
  2. Systems have to deliver certain types of functions in order to make it easy for a procurement team or venture oriented investor to ask, “Does your system cluster?” Answer: Yes. Venture oriented investor responds, “Check.” The procedure accounts for the sameness of the feature lists of Palantir, Recorded Future, and similar systems. When the similarities make companies nervous, litigation results. Example: Palantir versus i2 Ltd. (now a unit of IBM).
  3. Alternative methods of addressing tasks in content processing exist, but they are tough to implement in today’s computing systems. The technical reason for the reluctance to use some fancy math from my uncle Vladimir Ivanovich Arnold’s mentor Andrey Kolmogorov is that in many applications the computing system cannot complete the computation in a reasonable amount of time. The buzzword for this is the P=NP? problem; MIT published a useful explanation in 2009.
  4. Savvy researchers have to find a way to get from A to B that works within the constraints of time, confidence level required, and funding.
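
To make point one concrete, here is a minimal sketch of the familiar k-means recipe (Lloyd’s algorithm) in plain Python with NumPy. The data, the value of k, and the implementation details are my own illustrative assumptions, not any vendor’s code:

```python
# A minimal k-means sketch: the decades-old numerical recipe most
# content processing systems reach for when they need to "cluster."
import numpy as np

def kmeans(points, k, iterations=100, seed=0):
    """Cluster points (an n x d array) into k groups with Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        new_centers = np.array([
            points[labels == j].mean(axis=0) if (labels == j).any() else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Illustrative run: 200 random two-dimensional points, three clusters.
data = np.random.default_rng(1).random((200, 2))
centers, labels = kmeans(data, k=3)
print(centers)
```

Nothing in the loop is exotic, and that is the point: the same recipe, dressed in different marketing language, sits inside many of the systems procurement teams compare.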

The Wired article identifies other hurdles; for example, the need for constant updating. A system might be able to compute a solution using fancy math on a right sized data set. But toss in constantly updating information, and the computing resources just keep getting hungrier for more storage, bandwidth, and computational power. And the bigger the data, the more of it the computing system has to shove around. As fast as an iPad or a modern Dell notebook seems, the friction adds latency to a system. For some analyses, delays can have significant repercussions. Most Big Data systems are not the fleetest of foot.

The Wired article explains how fancy math folks cope with these challenges:

Vespignani uses a wide range of mathematical tools and techniques to make sense of his data, including text recognition. He sifts through millions of tweets looking for the most relevant words to whatever system he is trying to model. DeDeo adopted a similar approach for the Old Bailey archives project. His solution was to reduce his initial data set of 100,000 words by grouping them into 1,000 categories, using key words and their synonyms. “Now you’ve turned the trial into a point in a 1,000-dimensional space that tells you how much the trial is about friendship, or trust, or clothing,” he explained.

Wired labels this approach as “piecemeal.”
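
The reduction DeDeo describes is easy to sketch: map a large vocabulary onto a small set of categories via keyword and synonym lists, then represent a document (a trial) as a point in category space. The three categories and word lists below are invented stand-ins for the project’s 1,000:

```python
# A minimal sketch of keyword-and-synonym dimensionality reduction:
# a document becomes a vector of category counts instead of raw words.
import re
from collections import Counter

categories = {
    "friendship": {"friend", "companion", "acquaintance"},
    "trust":      {"trust", "honest", "faithful"},
    "clothing":   {"coat", "gown", "breeches", "hat"},
}

def document_vector(text):
    """Count how much a document is 'about' each category."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {name: sum(words[w] for w in vocab)
            for name, vocab in categories.items()}

trial = "The prisoner stole a coat and a hat from his friend, a faithful companion"
print(document_vector(trial))
# {'friendship': 2, 'trust': 1, 'clothing': 2}
```

Each trial becomes a short vector of category counts, which is far easier to compute with than 100,000 raw word counts.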

The fix? Wired reports:

the big data equivalent of a Newtonian revolution, on par with the 17th century invention of calculus, which he [Yalie mathematician Ronald Coifman] believes is already underway.

Topological analyses and sparsity may offer a path forward.

The kicker in the Wired story is the use of the phrase “tractable computational techniques.” The notion of “new math” is an appealing one.

For the near future, the focus will be on optimization of methods that can be computed on today’s gizmos. One method widely used in Autonomy, Recommind, and many other systems originates with the Reverend Thomas Bayes, who died in 1761. My relative died in 2010. I understand there were some promising methods developed after Kolmogorov died in 1987.
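
For readers who have not met the 1761 math, here is a minimal sketch of the Bayesian update that underpins such systems; the document-relevance numbers are invented for illustration:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
def posterior(prior, likelihood, evidence):
    """Probability of a hypothesis H after observing evidence E."""
    return likelihood * prior / evidence

# Toy relevance example (all probabilities are assumptions):
p_relevant = 0.10             # prior: 10 percent of documents are relevant
p_term_given_relevant = 0.60  # "merger" appears in 60 percent of relevant docs
p_term = 0.15                 # "merger" appears in 15 percent of all docs

# Seeing "merger" lifts the relevance estimate from 10 to 40 percent.
print(posterior(p_relevant, p_term_given_relevant, p_term))  # 0.4
```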

Inventing new math is underway. The question is, “When will computing systems become available to use these methods without severe sampling limitations?” In the meantime, Big Data keep on rolling in, possibly mis-analyzed and contributing to decisions with unacceptable levels of risk.

Stephen E Arnold, July 21, 2014

Harvard Professors Brawl of Words over Disruptive Innovation

July 21, 2014

The article titled Clayton Christensen Responds to New Yorker Takedown of ‘Disruptive Innovation’ on Businessweek consists of an interview with Christensen and his thoughts on Jill Lepore’s article. Two Harvard faculty members squabbling is, of course, fascinating, and Christensen defends himself well in this article with his endless optimism and insistence on calling Lepore “Jill.” The article describes disruptive innovation and Jill Lepore’s major problems with it as follows,

“The theory holds that established companies, acting rationally and carefully to stay on top, leave themselves vulnerable to upstarts who find ways to do things more cheaply, often with a new technology….Disruption, as Lepore notes, has since become an all-purpose rallying cry, not only in Silicon Valley—though especially there—but in boardrooms everywhere. “It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence,” she writes.”

Christensen refers Lepore to his book, in which he claims to answer all of her refutations of his theory. He, in turn, takes issue with her poor scholarship and considers her to be trying to discredit him rather than working with him to improve the theory through conversation and constructive criticism. At the end of the article he basically dares Lepore to come have a productive meeting with him. Things might get awkward at the Harvard cafeteria if these two cross paths.

Chelsea Kerwin, July 21, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

I2E Semantic Enrichment Unveiled by Linguamatics

July 21, 2014

The article titled Text Analytics Company Linguamatics Boosts Enterprise Search with Semantic Enrichment on MarketWatch discusses the launch of I2E Semantic Enrichment from Linguamatics. The new release allows for the mining of a variety of texts, from scientific literature to patents to social media. It promises faster, more relevant search for users. The article states,

“Enterprise search engines consume this enriched metadata to provide a faster, more effective search for users. I2E uses natural language processing (NLP) technology to find concepts in the right context, combined with a range of other strategies including application of ontologies, taxonomies, thesauri, rule-based pattern matching and disambiguation based on context. This allows enterprise search engines to gain a better understanding of documents in order to provide a richer search experience and increase findability, which enables users to spend less time on search.”

Whether they are spinning semantics for search or search for semantics, Linguamatics has made its technology available to tens of thousands of enterprise search users. John M. Brimacombe, a company representative, was straightforward in his comments about the disappointment surrounding enterprise search, but optimistic about I2E. It is currently being used by many top organizations, as well as the Food and Drug Administration.
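
The quoted description boils down to tagging concepts so the search engine can index the tags alongside the words. Here is a minimal sketch of that enrichment step against a toy ontology; the entries and identifiers are illustrative, and I2E’s actual NLP pipeline is far richer:

```python
# A minimal semantic enrichment sketch: annotate text with ontology
# concepts so an enterprise search engine can index them as metadata.
import re

ontology = {
    "aspirin":   {"type": "drug",    "id": "DRUG:0001"},   # illustrative IDs
    "ibuprofen": {"type": "drug",    "id": "DRUG:0002"},
    "headache":  {"type": "symptom", "id": "SYMPTOM:0001"},
}

def enrich(text):
    """Return the text plus concept annotations for the indexer."""
    annotations = []
    lowered = text.lower()
    for term, meta in ontology.items():
        for match in re.finditer(r"\b%s\b" % re.escape(term), lowered):
            annotations.append({"term": term, "offset": match.start(), **meta})
    return {"text": text, "annotations": annotations}

doc = "Aspirin and ibuprofen are both used to treat headache."
print(enrich(doc))
```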

Chelsea Kerwin, July 21, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Consulting Content Marketing: The Value of a Name

July 20, 2014

One of my readers sent me a link to this IDC report on Amazon. If you cannot read the image, here’s the link verified on July 20, 2014.

[image: the IDC report listing on Amazon, priced at $500]

Now check out the price of $500. The author is a former IDC expert, Sue Feldman.

Now check out this IDC report on Amazon and note that the price for my work and that of my researchers is $3,500. Notice that Ms. Feldman’s name is on the report. I don’t know if she was employed at IDC when my work was posted on Amazon without my permission. There is one new IDC “expert” name: Dave Schubmehl, a former OpenText and Janya executive. Also, my name is listed almost as an extra.

[image: the second IDC report listing on Amazon, priced at $3,500]

This is an archived article. IDC removed the report from the Amazon Web site shortly before this update was written.

I wonder if my name and my team’s contribution delivered up to 7X the value, or whether Dave Schubmehl’s contributions were the reason for the price boost. What’s clear is that IDC is taking content, using my name, selling reports with my name, and then deleting documents in a stepwise manner.

Fascinating.

In any event, thanks to my reader, and a pointed reminder to anyone purchasing consulting firm content marketing: find out who provided the information. I would suggest that my team obviously has some value because the former IDC professional’s work was a comparative bargain at $500.

Contracts for reuse of another’s work? No.

Permission to resell my research on Amazon? No.

Payments, sales reports, follow through? No.

What’s that say about well known consulting firm behavior? Exploiting a 70-year-old and his research team is one more example of a lapse in common sense, fair play, and corporate governance. Does this seem like a smaller scale version of the Google X Labs Forrest Hayes matter? I leave you to consider the question and your answer.

Stephen E Arnold, July 20, 2014

Search and Data-Starved Case Studies

July 19, 2014

LinkedIn discussions fielded a question about positive search and content processing case studies. I posted a link to a recent paper from Italy (you can find the url at this link).

My Overflight system spit out another case study. The publisher is Hewlett Packard, and the example involves Autonomy. The problem concerns the UK’s National Health Service and its paperless future. You can download the four page document at http://bit.ly/1wIsifS.

The Italian case study focuses on cheerleading for the Google Search Appliance. The HP case study promotes the Autonomy IDOL system applied to medical records.

The HP Autonomy document caught my attention because it uses a buzzword, the paperless office, that I first heard at Booz, Allen & Hamilton in 1978. Harvey Poppel, then a BAH partner, coined the phrase. The idea caught on. Mr. Poppel, who built a piano, snagged some ink in Business Week. That was a big deal in the late 1970s. Years later I met Alan Siegel, a partner at a New York design firm. He was working on promotion of the Federal government’s paperless initiative. About 10 years ago, I spent some time with Forrest (Woody) Horton, who was a prominent authority on the paperless office. Across the decades, talk about paperless offices generated considerable interest. These interactions about paperless environments have spanned 36 years. Paper seems to be prevalent wherever I go.

When I read the HP Autonomy case study, I thought about the efforts of some quite bright individuals directed at eliminating hard copy documents. There are reports, studies, and analyses about the problems of finding information in paper. I expected a reference to hard data, or some hard data itself. That context for the paperless argument would have captured my attention.

The HP Autonomy case study talks about an integrator’s engineers using IDOL to build a solution. The product is called Evolve and:

It used 28 years of information management expertise to improve efficiency, productivity and regulatory compliance. The IDOL analytics engine was co-opted into Evolve because it automatically ingests and segments medical records and documents according to their content and concepts, making it easier to find and analyze specific information.

The wrap up of the case study is a quote that is positive about the Kainos Evolve system. No big surprise.

After reading the white paper, three thoughts crossed my mind.

First, the LinkedIn member seeking positive search and content processing case studies might not find the IDOL case study particularly useful. The information is more of an essay from an ad-agency-generated in-house magazine.

Second, the LinkedIn person wondered why there were so few positive case studies about successful search and content processing installations. I think there are quite a few white papers, case studies, and sponsored content marketing articles crafted along the lines of the HP Autonomy case study. The desire to give the impression that the product encounters no potholes scrubs out the details so useful to a potential licensee.

Third, the case study describes a mandated implementation. So the Evolve product is in marketing low gear. The enthusiasm for implementing a new product shines brightly. Does the glare from the polish obscure a closer look?

At a minimum, I would have found the following information helpful even if presented in bullet points or tabular form:

  1. What was the implementation time? What days, weeks, or months of professional work were required to get the system up and running?
  2. What was the project’s initial budget? Was the project completed within the budget parameters?
  3. What is the computing infrastructure required for the installation? Was the infrastructure on premises, cloud, or hybrid?
  4. What is the latency in indexing and query processing?
  5. What connectors were used “as is”? Were new connectors required? If yes, how long did it take to craft a functioning connector?
  6. What training did users of the system require?

Information at this level of detail is difficult to obtain. In my experience, most search and content processing systems require considerable attention to detail. Take a short cut, and the likelihood of an issue rises sharply.

Obviously neither the vendor nor the licensee wants information about schedule shifts, cost overruns or underruns, and triage expenses to become widely known. The consequence of this jointly enforced fact void helps create case studies that are little more than MBA jargon.

Little wonder the LinkedIn member’s plea went mostly ignored. Paper is unlikely to disappear because lawyers thrive on hard copies. When litigation ensues, the paperless office and the paperless medical practice become a challenge.

Stephen E Arnold, July 19, 2014

What Most Search Vendors Cannot Pull Off

July 19, 2014

I recently submitted an Information Today column reporting on Antidot’s tactical play to enter the US market. One of the fact checkers for the write up alerted me that most of the companies I identified were unknown to US readers. Test yourself. How many of these firms do you recognize? How many of them provide information retrieval services?

  • A2ia
  • Albert (originally AMI Albert and AMI does not mean friend)
  • Dassault Exalead
  • Datops
  • EZ2Find
  • Kartoo
  • Lingway
  • LUT Technologies
  • Pertimm
  • Polyspot
  • Quaero
  • Questel
  • Sinequa

How did you do? The point is that French vendors of information retrieval and content processing technology find themselves in a crowded boat. Most of the enterprise search vendors have flamed out or resigned themselves to pitching to venture capitalists that their technology is the Next Big Thing. A lucky few sell out and cash in; for example, Datops. Others are ignored or forgotten.

The same situation exists for vendors of search technology in other countries. Search is a tough business. Former Googler Marissa Mayer was the boss when Yahoo’s share of the Web search market sagged below 10 percent. In the same time period, Microsoft increased Bing’s share to about 14 percent. Google dogpaddled and held steady. Other Web search providers make up the balance of the market players. Business Insider reported:

This is a big problem for Yahoo since its search business is lucrative. While Yahoo’s display ad business fell 7% last quarter, revenue from search was up 6% on a year-over-year basis. Revenue from search was $428 million compared to $436 million from its display ad business.

Now enterprise search vendors have been trying to use verbal magic to unlock consistently growing revenue. So far only two vendors have been able to find a way to open the revenue vault’s lock. Autonomy tallied more than $800 million in revenue at the time of its sale to Hewlett Packard. The outcome of that deal was a multi-billion dollar write off and many legal accusations. One thing is clear through the murky rhetoric the deal produced: Hewlett Packard had zero understanding of search and has been looking for a scapegoat to slaughter for its corporate decision. This is not helping the search vendors chasing deals.

Google converted Web search into a $60 billion revenue stream. The core idea for online advertising originated with the pay-to-play company GoTo, which morphed into Overture, which was then acquired by Yahoo. Think of the irony. Yahoo has the technology that makes Google a one trick, but very lucrative, revenue pony. But, to be fair, Google Web search is not the enterprise search needed to locate a factoid for a marketing assistant. Feed the query “show me the versions of the marketing VP’s last product road map” to a Google appliance and check the results. The human has to do some old fashioned human-type work. Finding this information with a Google Search Appliance, or any other information retrieval engine for that matter, is tricky. Basic indexing cannot do the job, so most marketing assistants hunt manually through files, folders, and hard copies looking for the Easter egg.

Many of the pioneering search engines tried explaining their products and services using euphemisms. There was question answering, content intelligence, smart content, predictive retrieval, entity extraction, and dozens and dozens of phrases that sound fine but are very difficult to define; for example, knowledge management and the phrase “enterprise search” itself or “image recognition” or “predictive analytics”, among others.

I had a hearty chuckle when I read “Don’t Sell a Product, Sell a Whole New Way of Thinking.” Search has been available for at least 50 years. Think RECON, Orbit, Fulcrum Technologies, BASIS, Teratext, and other artifacts of search and retrieval. Smart folks cooked up even the computationally challenged Delphes system, the metasearch system Vivisimo, and the essentially unknown Quertle.

A romp through these firms’ marketing collateral, PowerPoints, and PDFs makes clear that no buzzword has been left untried. Buyers did not, and do not, know what the systems actually delivered. This is evidence that search vendors have not been able to “sell a whole new way of thinking.”

No kidding. The synonyms search marketers have used in order to generate interest and hopefully a sale are a catalog of information technology jargon. Here is a short list of some of the terms from the 1990s:

  • Business intelligence
  • Competitive intelligence
  • Content governance
  • Content management
  • Customer support, then customer relationship management
  • Knowledge management
  • Neurodynamics
  • Text analytics

If I accept the Harvard analysis, the failing of enterprise search is not financial fiddling and jargon. As you may recall, Microsoft paid $1.2 billion for Fast Search & Transfer. The investigation into allegations of financial fancy dancing was resolved recently, with one executive facing a possible jail term and employment restrictions. There are other companies that tried to blend search with content, only to find that the combination was not quite like peanut butter and jelly. Do you use Factiva or Ebsco? Did I hear a “what?” Other companies embraced slick visualizations to communicate key information at a glance. Do you remember Grokker? There was semantic search. Do you recollect Siderean Software?

One success story was Oingo, renamed Applied Semantics. Google understood the value of mapping words to ads and purchased the company to further its non search goals of generating ad revenue.

According to the HBR:

To find the shift, ask yourself a few questions. What was the original insight that led to the innovation? Where do you feel people “don’t get it” about your solution? What is the “aha” moment when someone turns from disinterested to enthusiastic?

Those who code up search systems are quite bright. Is this pat formula of shifting thinking the solution to the business challenges these firms face?

Attivio. Founded by Fast Search & Transfer alums, the company has ingested more than $35 million in venture funding. The company’s positioning is “an actionable 360 degree view of anything you need.” Okay. Dassault Exalead used the same line several years ago.

Coveo. The company has tapped venture firms for more than $30 million since the firm’s founding in 2004. Coveo uses the phrase “enterprise search” and wraps it in knowledge workers, customer service, engineering, and CRM. The idea is that Coveo delivers solutions tailored to specific business functions and employee roles.

SRCH2. This is a Xoogler founded company that, like Perfect Search before it, emphasizes speed. The pitch is an alternative that performs better than open source search solutions.

Lucid Works. Like Vivisimo, Lucid Works has embraced Big Data and the cloud. The only slow downs Lucid has encountered have been turnover in CEOs, marketing, and engineering professionals. The most recent hurdle to trip up Lucid is the interest in ElasticSearch, fat with almost $100 million in venture funding and developers from the open source community.

IBM Watson. Based on open source and home grown technology, IBM’s marketers have showcased Watson on Jeopardy and garnered headlines for the $1 billion investment IBM is making in its “smart” information processing system. The most recent demonstration of Watson was producing a recipe for Bon Appetit readers.

Amazon’s search approach is to provide it as a service to those using Amazon Web Services. Search is, in my mind, just a utility for Amazon. Amazon’s search system on its eCommerce site is not particularly good. Want to NOT out (that is, exclude) books not yet available on the system? Well, good luck with that query.

After I stopped chuckling, I realized that the Harvard article is less concerned with precision and recall than with advocating deception, maybe cleverness. No enterprise search vendor has approached Autonomy’s revenues, with the sole exception of Google’s licensing of the wildly expensive Google Search Appliance. At the time of its sale to Oracle, Endeca was chugging along at an estimated $150 million in revenue. Oracle paid about $1 billion for Endeca. With that benchmark, name another enterprise search vendor or eCommerce search vendor that has raced past Endeca. For the majority of enterprise search vendors, revenues of $3 to $10 million represent very significant achievements.

An MBA who takes over an enterprise search company may believe that wordsmithing will make sales. Sure, some sales may result, but will the revenue be sustainable? Most enterprise search sales are a knee jerk reaction to problems with the incumbent search system.

Without concrete positive case studies, talking about search is sophistry. There are comparatively few specific return on investment analyses for enterprise search installations. I provided a link to a struggling LinkedIn person about an Italian library’s shift from the 1960s BASIS system to a Google Search Appliance.

Is enterprise search an anomaly in business software? Will the investment firms get their money back from their investments in search and retrieval?

Ask a Harvard MBA steeped in the lore of selling a whole new way of thinking. Ignore 50 years of search history. Success in search is difficult to achieve. Duplicity won’t do the job.

Stephen E Arnold, July 19, 2014

Google and Microsoft Shuffle the Deck

July 18, 2014

Each company is using different card tricks.

I see a common theme in the termination of employees at Microsoft and the management redeal at Google.

I read “Beyond 12,500 Former Nokia Employees, Who Else Is Microsoft Laying Off?” I am okay with a Microsoft watcher pointing out that it is not just Nokia staff getting the axe. The comment that caught my attention reveals how serious a problem Microsoft faces. Here’s the passage I noted:

Under the new structure, a number of Windows engineers, primarily dedicated testers, will no longer be needed….Instead, program managers and development engineers will be taking on new responsibilities, such as testing hypotheses. The goal is to make the OS team work more like lean startups than a more regimented and plodding one adhering to two- to three-year planning, development, testing cycles.

As I understand this, a company almost four decades into its life cycle wants to be “like lean start ups”. I am not sure if my experience is similar to that of other professionals, but working with fewer people does not equal a start up. In a start up, life is pretty crazy. Need a purchase order? Well, someone has to work up that system. Need to get reimbursed for that trade show party? No problem, we’ll get a check cut. Over time, humans get tired of crazy and set up routines, systems, and procedures. The thrill of a start up is going to be difficult to emulate at Microsoft.

That’s the core problem. Microsoft has missed or just plain failed with Internet search, unified experiences across devices, online advertising, enterprise search, and improving its core applications. Adding features that a small percentage of users try is not innovation. Microsoft is no longer a start up, and firing people will not make it one. Microsoft is an aircraft carrier that takes a long time to turn, to stop, and to redirect. Microsoft has to demonstrate to its stakeholders that it is taking purposeful action. Firing thousands of people makes headlines. It does not create new products, services, or meaningful innovations. IBM has decided that throwing billions of dollars at projects that “could” deliver big revenue is almost as wild and woolly.

Now to Google. The company reported its quarterly earnings. Cheerleaders for the company point to growth in ad revenue. The New York Times states:

Google’s revenue for the quarter was $15.96 billion, an increase of 22 percent over the year-ago quarter.

Tucked into the article were several comments I marked as indicators of the friction Google faces:

ITEM: “The price that advertisers pay each time someone clicks on an ad — or “cost per click,” in Google talk — dropped 6 percent from the year-ago quarter, largely because of the shift to increased mobile advertising.”

ITEM: “Mobile, however, is something that Facebook seems to have cracked. The social media giant accounted for almost 16 percent of mobile advertising dollars spent around the world last year, eMarketer estimates, up from 9 percent in 2012. Google dropped to a 41.5 percent share of the mobile ad market last year, down from 49.8 percent in 2012.”

ITEM: ““There’s a little bit of concern in the markets that there’s some drunken spending going on,” said Mark Mahaney, an Internet analyst with RBC Capital Markets.”

The New York Times’ article omitted one point I found interesting:

Excluding its cost of revenue, Google’s core expenses in the second quarter jumped 26 percent from last year. Source: http://bit.ly/Uf8JPM.

The Google “core expenses” are creeping up. Amazon has this problem as well. Is there a reason to worry about the online ad giant? Not for me. But the “drunken spending” comment, while clever, has the ring of truth. Then the swift departure of Glass director Babak Parviz (Amir Parviz, Amirparviz, or Parvis) suggests disenchantment somewhere between the self assembly wizard and Google management. After a decade of effort, Google has yet to demonstrate that it can create a non-advertising revenue stream of significant magnitude for a $60 billion a year company.

Microsoft’s and Google’s recent actions make clear that both companies are trying to adapt to the realities of today’s market. Both companies are under increasing pressure to “just make it work.” Three card Monte, anyone?

Stephen E Arnold, July 18, 2014

The Discovery of the “Adversarial” Image Blind Spot in Neural Networks

July 18, 2014

The article titled Does Deep Learning Have Deep Flaws on KDnuggets explains the implications of the results of a recent study of neural networks and image classification. The study, completed by Google, NYU, and the University of Montreal, found that an as yet unexplained flaw exists in neural networks when it comes to recognizing images that appear identical to the human eye. Researchers can generate misclassified “adversarial” images that look exactly the same as a correctly classified image. The article goes on to explain,

“The network may misclassify an image after the researchers applied a certain imperceptible perturbation. The perturbations are found by adjusting the pixel values to maximize the prediction error. For all the networks we studied (MNIST, QuocNet, AlexNet), for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network… The continuity and stability of deep neural networks are questioned. The smoothness assumption does not hold for deep neural networks any more.”

The article makes this statement and later links it to the possibility of these “adversarial” images existing even in the human brain. Since the study found that one perturbation can cause misclassification in separate networks, trained for different datasets, it suggests that these “adversarial” images are universal. Most importantly, the study suggests that AI has blind spots that have not been addressed. They may be rare, but as our reliance on technology grows, they must be recognized and somehow accounted for.
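
To see the mechanics in miniature, here is a hedged sketch of the perturbation idea using a toy linear classifier in plain NumPy rather than the paper’s deep networks; the weights, the input, and the step size are invented for illustration:

```python
# Sketch: nudge pixels in the direction that increases prediction error.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=784)   # hypothetical trained weights (28x28 image, flattened)
b = 0.0

def predict(x):
    """Probability that image x belongs to class 1 (logistic model)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.random(784)        # stand-in for a correctly classified image
y = 1.0                    # its true label

# Gradient of the cross-entropy loss with respect to the input pixels.
grad_x = (predict(x) - y) * w

# A tiny, visually imperceptible step per pixel, aimed at raising the loss.
epsilon = 0.01
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print("original prediction:  %.3f" % predict(x))
print("perturbed prediction: %.3f" % predict(x_adv))
print("max pixel change:     %.3f" % np.abs(x_adv - x).max())
```

Even this toy shows the principle: a per-pixel change far too small to see can be aimed precisely at increasing the prediction error.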

Chelsea Kerwin, July 18, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Jepsen-Testing Elasticsearch for Safety and Data Loss

July 18, 2014

The article titled Call Me Maybe: Elasticsearch on Aphyr explores potential issues with Elasticsearch. Jepsen is Aphyr’s series of tests that examines how different technologies and software behave under various types of network failure. Elasticsearch is built on the solid Java indexing library Apache Lucene. The article begins with an overview of how Elasticsearch scales through sharding and replication.

“The document space is sharded–sliced up–into many disjoint chunks, and each chunk allocated to different nodes. Adding more nodes allows Elasticsearch to store a document space larger than any single node could handle, and offers quasilinear increases in throughput and capacity with additional nodes. For fault-tolerance, each shard is replicated to multiple nodes. If one node fails or becomes unavailable, another can take over…Because index construction is a somewhat expensive process, Elasticsearch provides a faster database backed by a write-ahead log.”
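
As a concrete illustration of the sharding and replication the quote describes, here is a minimal sketch that sets those knobs through Elasticsearch’s standard REST API from Python; the host, index name, document, and shard counts are assumptions for illustration, not values from the article:

```python
# Sketch: create an index sliced into shards, each copied to a replica.
import json
import requests

es = "http://localhost:9200"  # assumed local Elasticsearch node

# Slice the document space into 5 shards; replicate each shard once.
settings = {"settings": {"number_of_shards": 5, "number_of_replicas": 1}}
resp = requests.put("%s/docs" % es, data=json.dumps(settings))
print(resp.json())

# Index a document; Elasticsearch routes it to one shard and its replica.
doc = {"title": "example document"}
resp = requests.put("%s/docs/doc/1" % es, data=json.dumps(doc))
print(resp.json())
```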

Over a series of tests (with results summarized by delightful Barbie and Ken doll memes), the article decides that while version control may be considered a “lost cause,” Elasticsearch handles inserts superbly. For more information on how Elasticsearch behaved through speed bumps, building a nemesis, nontransitive partitions, needless data loss, random and fixed transitive partitions, and more, read the full article. It ends with recommendations for Elasticsearch and for users, and concedes that the post provides far more information on Elasticsearch than anyone would ever desire.

Chelsea Kerwin, July 18, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Bing Search: Will It Be Forgotten?

July 17, 2014

I read “Bing Implements ‘Right to Be Forgotten’ Ruling, Asks Applicants ‘Are You Famous?’” My reaction is that search is Google. Microsoft wants to be compliant with the European Union. The Register took a different view of the situation in the story “Forgotten Bing Responds to Search Index ECJ Rule: Hello? Remember Us?” People don’t want to be in the Google index but don’t seem to think about the Bing index.

Microsoft wants respect. See “Microsoft to Cut Up to 18,000 Jobs over Next Year.” Some of these soon-to-be-RIFed employees may create blogs and other online content. Microsoft executives may want to make some content about the company go away.

The comment by Daniel Ives, an analyst with FBR Capital Markets, explains the situation like this:

“Under the Ballmer era, there were many layers of management and a plethora of expensive initiatives being funded that has thus hurt the strategic and financial position the company is in, especially in light of digesting the Nokia acquisition,” says Ives. “Nadella is using today as an opportunity to make sure that Microsoft is ready and well positioned to embark on its next chapter of growth around mobile and cloud.”

What strikes me is that the observation can apply to Amazon and Google equally well. As these companies expand, generating new revenues and delivering meaningful profit becomes more and more difficult.

Microsoft’s plight may be a harbinger for other firms as well. Search is a bit of a muddle at Microsoft and time may be running out for Bing to become a substantial contributor to Microsoft’s financial position.

Stephen E Arnold, July 17, 2014
