Enterprise Search: Roasting Chestnuts in the Cloud
March 6, 2015
I read “Seeking Relevancy for Enterprise Search.” I enjoy articles about “relevance.” The word is ambiguous and must be disambiguated. Yep, relevance is one of those functions search vendors love to talk about and rarely deliver.
The point of the write up is that enterprise content should reside in the cloud. The search system can then process the information, build an index, and deliver a service that allows a single search to output a mix of hits.
Sounds good.
My concern is that I am not sure that’s what users want. The reason for my skepticism is that the shift to the cloud does not fix the broken parts of information retrieval. The user, probably an employee or consultant authorized to access the search system, has to guess which keywords unlock the information in the index.
Search vendors continue to roast the chestnuts of results lists, keyword search, and workarounds for performance bottlenecks. The time is right to stop selling chestnuts to those eager to recapture a childhood flavor and to move to a more efficient information delivery system. Image source: http://www.mybalkan.org/weather.html
That’s sort of a problem for many searchers today. In many organizations, users express frustration with search because multiple queries are needed to find information that seems relevant. Then the mind-numbing, time-consuming drudgery begins. The employee opens a hit, scans the document, copies the relevant bit (if it is spotted in the first place), pastes the item into a Word file or a OneNote-type app, and repeats the process. Most users look at the first page of results, pick the most likely suspect, and use that information.
No, you say.
I suggest you conduct the type of research my team and I have been doing for many years. Expert searchers are a rare species. Today’s employees perceive themselves as really busy, able to make decisions with “on hand” information, and believe themselves to be super smart. Armed with this orientation, whatever these folks do is, by definition, pretty darned good.
It is not. Just don’t try telling a 28-year-old that she is not a good searcher and is making decisions without checking facts and assessing the data indexed by a system.
What’s the alternative?
My most recent research points to a relatively new branch or tendril of information access. I use the term “CyberOSINT” to embrace systems that automatically collect, analyze, and output information to users. Originally these systems focused on public content such as Facebook and Twitter posts and open Web content. Now the systems are moving inside the firewall.
The result is that the employee interacts with outputs, not queries: information presented in the form of answers, a map with dynamic data showing where certain events are taking place, or streams of data that feed other systems, such as a programmatic trading application on Wall Street.
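To make the pattern concrete, here is a toy sketch of the collect, analyze, and output loop these systems automate. The feed URL, the geo fields, and the alert format are my inventions for illustration; no vendor’s pipeline is this simple.

```python
# Toy sketch of an NGIA-style collect-analyze-output loop.
# The feed, fields, and alert format are hypothetical illustrations.
import json
import urllib.request

FEEDS = ["https://example.com/osint/feed.json"]  # hypothetical source

def collect(feeds):
    """Pull raw items from each source; no human query involved."""
    for url in feeds:
        with urllib.request.urlopen(url) as response:
            yield from json.load(response)  # assumes a JSON array of items

def analyze(items):
    """Enrich and filter automatically; here, keep geo-tagged events."""
    for item in items:
        if "lat" in item and "lon" in item:
            yield item

def output(items):
    """Deliver results to a user or a downstream system, not a hit list."""
    for item in items:
        print(f"ALERT: {item.get('title', '?')} at ({item['lat']}, {item['lon']})")

output(analyze(collect(FEEDS)))
```

The employee never types a query; the system decides what to collect, what matters, and where to send it.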
Yes, keyword search is available in these systems, which can be deployed on premises, in the cloud, or in a hybrid configuration. The main point is that the chokehold of keyword search is broken using smart software, automatic personalization, and reports.
Keyword search is not an enterprise application. Vendors describe the utility function as the ringmaster of the content circus. Traditional enterprise search is like a flimsy card table upon which has been stacked a rich banquet of features and functions.
The card table cannot support the load. The next generation information access systems, while not perfect, represent a significant shift in information access. To learn more, check out my new study, CyberOSINT.
Roasting chestnuts in the cloud delivers the same traditional chestnut. That’s the problem. Users want more. Maybe a free-range, organic gourmet burger?
Stephen E Arnold, March 6, 2015
Enterprise Search: Is Keyword Search a Lycra-Spandex Technology?
March 3, 2015
I read a series of LinkedIn posts about why search may be an enterprise application flop. To access the meanderings of those who believe search is a young Bruce Jenner, you will have to sign up for LinkedIn and then wrangle an invitation to this discussion. Hey, good luck with this access to LinkedIn thing.
Over the years, enterprise search has bulked up. The keyword indexing has been wrapped in layers of helper code. For example, search now classifies, performs workflow operations, identifies entities, supports business intelligence dashboards, delivers self-service Web help, handles Big Data, and provides dozens of other services.
Image Source: www.sochealth.co.uk.
I have several theories about this chubbification of keyword search. Let me highlight the thoughts that I jotted down as I worked through the “flop” postings on LinkedIn.
First, keyword search is not particularly useful to some people looking for information in an organization. The employee has to know what he or she needs and the terminology to use to unlock the secrets of the index. Add some time pressure and keyword search becomes infuriating. The fix, which began when Fulcrum Technologies pitched a platform approach to search, was to make search a smaller part of a more robust information management solution. You can still buy pieces of the original 1980s Fulcrum technology from OpenText today.
Second, system users continue to perceive results lists as a type of homework. The employee has to browse the results list, click on documents that may contain the needed information, scan the document, identify the factoid or paragraph needed, copy it to another document, and then repeat the process. Employees want answers. What better way to deliver those answers than a “point and click” interface? Just pick what one needs and be done with the drudgery of keyword search.
Third, professionals working in organizations want to find information from external sources like Web pages and blogs and from internal sources such as the server containing the proposals or the president’s PowerPoint presentations. Enterprise search is presented as a solution to information access needs. The licensee quickly learns that most enterprise search systems require money, engineers, and time to set up so that content from disparate sources can be presented from a single interface. Again, employees grouse when videos from YouTube and from the training department are not in the search results. Some documents containing needed information are not in the search system’s index, but a draft version of the document is available via a Bing or Google search.
Fourth, the enterprise search system built on keywords lacks intelligence. For many vendors the solution is to add semantic intelligence, dynamic personalization (which figures out what an employee needs by observing his information behaviors), and predictive analytics (which predicts what is needed by the company, a department, or an individual).
Fifth, vendors have emphasized that a smart organization must have a taxonomy, a list of words and concepts tailored to the specific organization. These terms enrich the indexing of content. To make taxonomy management easy as pie, search vendors have tossed in editorial controls for indexing, classification, and hit boosting so that certain information appears whether the employee asked for the data or not. (A minimal sketch of hit boosting appears below.)
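Here is that minimal sketch. The rule table and weights are invented for illustration and reflect no particular vendor’s product.

```python
# Minimal sketch of editorial hit boosting: documents matching a curated
# rule get a score bump so they surface whether or not the employee asked
# for them. The rules and weights are invented examples.

BOOST_RULES = {"compliance": 2.0, "hr policy": 1.5}  # hypothetical taxonomy terms

def boosted_score(doc_title, base_score):
    score = base_score
    for term, weight in BOOST_RULES.items():
        if term in doc_title.lower():
            score *= weight  # editorial bump, independent of the query
    return score

docs = [("Quarterly compliance update", 0.4), ("Cafeteria lunch menu", 0.7)]
ranked = sorted(docs, key=lambda d: boosted_score(d[0], d[1]), reverse=True)
print(ranked)  # the compliance document now outranks the higher base score
```

The editorial rule, not the employee’s query, decides what floats to the top.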
In short order, the enterprise search system looks quite a bit like the “Obesity Is No Laughing Matter” poster.
This state of affairs is good for consulting engineers (SharePoint search, anyone?), mid-tier consulting firm pundits, failed webmasters recast as search experts, and various hangers-on. The obese enterprise search system is not particularly good for the licensing organization, the employees who are asked to use the system, or the system administrators who have to shoehorn search into their already stuffed schedule for maintaining databases, accounting systems, enterprise resource planning, and network services.
Search is morbidly obese. No diet is going to work. The fix, based on the research conducted for my new monograph CyberOSINT, is that a different approach is needed. Automated collection, analysis, and outputs are the future of information access.
Keyword search is a utility and is available in NGIA (next generation information access) systems. Unlike the obese keyword search systems, NGIA information access has been engineered to deliver more integrated services to users relying on mobile devices as well as traditional desktop computers.
Obese search is no laughing matter. One cannot make a utility into an NGIA system. However, an NGIA system can incorporate search as a utility function. Keep this in mind if you are embracing Microsoft SharePoint-type systems. Net net: traditional enterprise search is splitting its seams, and it is unsightly.
Stephen E Arnold, March 3, 2015
Automated Collection Keynote Preview
February 14, 2015
On February 19, 2015, I will deliver the keynote at an invitation-only intelligence conference in Washington, DC. A preview of my formal remarks is available in an eight minute video at this link. The preview has been edited. I have inserted an example of providing access to content without requiring a Web site.
One comment concerns the speed with which information and data change and become available: humans cannot keep up with external and most internal-to-the-organization information.
The preview also includes a simplified schematic of the principal components of a next generation information access system. The diagram is important because it reveals that keyword search is a supporting utility, not the wonder tool many marketers hawk to unsuspecting customers. The supporting research for the talk and the full day conference appears in CyberOSINT, which is now available as an eBook.
Stephen E Arnold, February 14, 2015
Coveo Asserts Record Growth and Improved Relevance
February 12, 2015
Proprietary enterprise search is one reason DARPA has made noise about a new threat center. The idea is that cyber intelligence is a hot issue. Without repeating the information in CyberOSINT, suffice it to say that keyword search is not up to the findability tasks in today’s world. For more on the threat center integration, you may want to review “New Threat Center to Integrate Cyber Intelligence.”
In this context, I read “Coveo Announces Record Growth in 2014.” The company was founded in 2005 in Canada. In the last nine years, according to Crunchbase, the company has ingested $34.7 million from eight investors. The most recent funding round was in December 2012, when the company obtained an additional $18 million. Let’s assume the data are accurate.
In the “record growth” announcement, the company states:
Coveo today announced accelerated growth in 2014 via strong demand for its enterprise search-based applications that help employees upskill as they work, and driven in large part by its continued strategic partnerships with leading organizations such as Salesforce. The year was also marked by the best quarter in the history of the company and the 1,000th enterprise activation of its software, with new customer Sonus Networks.
The “record growth” news story omits an important data point: Financial results with numbers. Coveo is a privately held company and under no obligation to provide any hard numbers. In lieu of metrics, the story provides this interesting item: Enhanced relevance tuning. After nearly nine years in the enterprise market, I had assumed that Coveo had figured out relevance.
Coveo, like its fellow travelers in the keyword search sector Attivio and BA Insight, is recognized in various “expert” advisory firms’ lists of important companies. Also, each of these three keyword search companies is working overtime to generate Autonomy- or Endeca-scale revenues. The three keyword search vendors have to differentiate themselves as the US Department of Defense actively seeks next generation approaches. The sunny days of Autonomy and Endeca have been hit by climate change even as they recline in the shelter of Hewlett Packard and Oracle, their new owners.
My hunch is that if the financials back up the assertions in the “record growth” story, stakeholders will be happy campers. On the other hand, if those funding traditional search systems relying on proprietary code do not see a solid payback, dreary days may be ahead.
For functional information retrieval, many large companies—including the firms developing next generation information access systems—ignore proprietary search solutions. Open source software delivers a lower cost, license-fee-free commodity function.
Did anyone bring umbrellas? In the heyday of enterprise search, vendors gave away bumbershoots with logos affixed. These may be needed because the search climate has changed, with heavier rainfall predicted.
Stephen E Arnold, February 12, 2015
Enterprise Search: Security Remains a Challenge
February 11, 2015
Download an open source enterprise search system or license a proprietary system. Once the system has been installed, the content crawled, the index built, the interfaces set up, and the system optimized, the job is complete, right?
Not quite. Retrofitting a keyword search system to meet today’s security requirements is a complex, time consuming, and expensive task. That’s why “experts” who write about search facets, search as a Big Data system, and search as a business intelligence solution ignore security or reassure their customers that it is no big deal. Security is a big deal, and it is becoming a bigger deal with each passing day.
There are a number of security issues to address. The easiest of these is figuring out how to piggyback on access controls provided by a system like Microsoft SharePoint. Other organizations use different enterprise software. As I said, using access controls already in place and diligently monitored by a skilled security administrator is the easy part.
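To show what even the easy part involves, here is a minimal sketch of security trimming: filtering hits against per-document access controls before the user sees a results list. The ACLs and the group lookup are hypothetical stand-ins for whatever SharePoint, LDAP, or Active Directory actually supplies.

```python
# Minimal sketch of security trimming. The document ACLs and the group
# lookup are hypothetical stand-ins for a real directory service.

DOC_ACLS = {
    "q3_forecast.docx": {"finance", "executives"},
    "lunch_menu.pdf": {"all_employees"},
}

def user_groups(user):
    # In a real deployment this comes from SharePoint, LDAP, or AD.
    directory = {"alice": {"finance", "all_employees"}}
    return directory.get(user, set())

def trim_results(user, hits):
    """Drop any hit whose ACL does not intersect the user's groups."""
    groups = user_groups(user)
    return [hit for hit in hits if DOC_ACLS.get(hit, set()) & groups]

print(trim_results("alice", ["q3_forecast.docx", "lunch_menu.pdf"]))
# alice sees both; a user outside "finance" would not see the forecast
```

Every query pays this filtering tax, and the sketch assumes the ACLs are already synchronized with the index, which is itself a nontrivial job.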
A number of sticky wickets remain; for example:
- Some units of the organization may do work for law enforcement or intelligence entities. There may be different requirements. Some are explicit and promulgated by government agencies. Others may be implicit, acknowledged as standard operating procedure by those with the appropriate clearance and the need to know.
- Specific administrative content must be sequestered. Examples range from information assembled for employee health matters to compliance records for pharma products or controlled substances.
- Legal units may require that content be contained in a managed system with administrative controls in place to ensure that no changes are introduced into a content set, that access is provided only to those with specific credentials, or that material is kept “off the radar” as the in-house legal team tries to figure out how to respond to a discovery activity.
- Some research units may be “black”; that is, no one in the company, including most information technology and security professionals, is supposed to know where an activity is taking place or what information is of interest to the research team, and specialized security steps must be enforced. These can include dongles, air gaps, and undisclosed locations and staff.
An enterprise search system without NGIA security functions is like a 1960s Chevrolet project car. Buy it ready to rebuild for $4,500 and invest $100,000 or more to make it conform to 2015’s standards. Source: http://car.mitula.us/impala-project
How do enterprise search systems deal with these access issues? Are not most modern systems positioned to index “all” content? Are the procedures for each of these four examples part of the enterprise search systems’ administrative tool kit?
Based on the research I conducted for CyberOSINT: Next Generation Information Access and my other studies of enterprise search, the answer is, “No.”

Stephen E Arnold, February 11, 2015
Has Lightning Struck for MaxxCat?
February 10, 2015
Have you ever heard of MaxxCat? It has popped up in the back of our RSS feed every now and then when it has accomplished a major breakthrough. The company skipped to the forefront of enterprise search news this morning with one of its products. Before we discuss what wonders MaxxCat plans to do for enterprise search, here is a little more about the company.
MaxxCat was established in 2007 to take advantage of the growing enterprise search solutions market. The company specializes in low cost search and storage as well as integration and managed hosting services. MaxxCat creates well-regarded hardware with the emphasis that its clients should be able to concentrate on more important things than storage. The company’s search appliance hosting page explains a bit more about what MaxxCat offers:
“MaxxCAT can provide complete managed platforms using your MaxxCAT appliances in one or more of our data centers. Our managed platforms allow you to focus on your business, and allow us to focus on getting the maximum performance and uptime from your enterprise search appliances. Nobody can host, tune or manage MaxxCAT appliances as well as the people who invented them.”
Enterprise search appliances without a headache? It is a new and interesting concept, and MaxxCat seems to have a handle on it.
Whitney Grace, February 10, 2015
Sponsored by ArnoldIT.com, developer of Augmentext
Attivio: New, New, New after $70 Million and Seven Years
February 7, 2015
With new senior managers and a hunt on for a new director of financial services, Attivio is definitely trying to shake ‘em up. I received some public relations spam about the most recent version of the Attivio system. The approach combines open source software with home brew code, an increasingly popular way to sell licenses, consulting, and services. To top it off, Attivio is an outfit that has the “best company culture” and makes Dave Schubmehl’s IDC report about Attivio, with my name on it, available for free. This was a $3,500 item on Amazon earlier this year. Now. Free.
Attivio’s February 3, 2015, news release explains that Attivio is in the enterprise search business. You can read the presser at this link. Not too long ago, Attivio was asserting that it was the solution to some business intelligence woes. I suppose search and business intelligence are related, but “real” intelligence requires more than keyword search and a report capability.
The release explains that Attivio is—I find this fascinating—“reinventing Big Data Search and Dexterity.” Not bad for open source, home brew, and Fast Search & Transfer flavoring. Search and dexterity. Definitely a Google AdWords keeper.
Attivio’s presser says:
Attivio 4.3 delivers new functionality and improvements that make it dramatically easier to build, deploy, and manage contextually relevant applications that drive revolutionary insight. Companies with structured and unstructured data in disparate silos can now quickly gain immediate access to all information with universal contextual enrichment, all delivered from Attivio’s agile enterprise platform.
I like “revolutionary insight.” Keep in mind that Attivio was formed by former Fast Search & Transfer executives in 2007 and has ingested, according to Crunchbase, $71.1 million in seven years. That works out to $10 million per year to do various technical things and sell products and services to generate money.
More significant to me than money that may be difficult or impossible to repay with a hefty uptick is this: in seven years, Attivio has released four versions of its flagship software. With open source providing a chunk of functionality, it strikes me that Attivio may be lagging behind the development curve of some other companies in the content processing sector. But with advisors like Dave Schubmehl and his colleagues, the pace of innovation is likely to be explained as just wonderful. At Cambridge University, one researcher pointed out that work done in 2014 is essentially part of ancient history. There is perhaps a difference between Cambridge in the UK and Cambridge in Massachusetts.
What does Attivio 4.3 offer as “key features”? Here’s what the news release offers:
- ASAP: Attivio Search Application Platform – a simple, intuitive user interface for non-technical users building search-based applications;
- SAIL: Search Analytics Interactive Layer – offers more robust functionality and an enhanced user experience;
- Advanced Entity Extraction: New machine-learning based entity extraction module enriches content with higher accuracy and improved disambiguation, enabling deeper discovery and providing a smart alternative to managing entity dictionaries;
- Simplified Management: Empowers business users to handle documents and manage settings in a code-free environment;
- Composite Documents: Unique ability to search across document fragments optimized to deliver sub-second response times;
- New Designer Tools: Simplifies Attivio management through Visual Workflow and Component Editors, enables all users to design and build custom processing logic in an integrated UI.
There are a couple of important features available in other vendors’ systems but absent from this list; for example, geographic functions, automated real-time content collection, automated content analytics, and automated outputs to a range of devices, humans, or other systems.
ASAP and SAIL are catchy acronyms, but I find the notions behind them less than satisfying. The entity extraction function is interesting, but there is no detail about how it works in languages other than those using Roman character sets, how the system deals with variants, or how the system maps one version of an entity to another in content that is static imagery or video.
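For readers who want a feel for machine-learning entity extraction that does not depend on an entity dictionary, here is a minimal sketch using the open source NLTK library. This is my illustration, not Attivio’s module, and it answers none of the multilingual or disambiguation questions above.

```python
# Minimal sketch of classifier-based entity extraction with NLTK.
# This stands in for the idea only; it is not Attivio's module.
import nltk

# One-time model downloads; the chunker is classifier based, so no
# hand-maintained entity dictionary is required.
for pkg in ("punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"):
    nltk.download(pkg, quiet=True)

text = "Attivio was formed by former Fast Search & Transfer executives."
tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text)))

for node in tree:
    if hasattr(node, "label"):  # chunked entities carry a label; plain tokens do not
        name = " ".join(token for token, _ in node.leaves())
        print(node.label(), name)  # e.g. ORGANIZATION Attivio
```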
I am not sure what a composite document is. If a document contains images and videos, what does the system do with these content objects? If the document is an XML representation, what’s the time penalty to convert content objects to well-formed XML? With interfaces becoming the new black, Attivio is closing the gap with the Endeca interface toolkit. Endeca dates from the late 1990s and has blazed a trail through the same marketing jungle that Attivio is now retracing.
For more information about Attivio, visit the company’s Web site at www.attivio.com. The company will be better equipped to explain virtual, enterprise search, big data, and the company’s financial posture than I.
Stephen E Arnold, February 7, 2015
Enterprise Search: Mapless and Lost?
February 5, 2015
One of the content challenges traditional enterprise search trips over is geographic functions. When an employee looks for content, the implicit assumption is that keywords will locate a list of documents in which the information may be found. The user then scans the results list—whether in Google-style laundry lists or in the graphic displays popularized by Grokker and Kartoo, both of which have gone dark. (Quick aside: Both of these outfits reflect the influence of French information retrieval wizards. I think of them as emulators of Datops “balls” displays.)
A results list displayed by the Grokker system. The idea is that the user explores the circular areas. These contain links to content germane to the user’s keyword query.
The Kartoo interface displays sources connected to related sources. Once again the user clicks and goes through the scan, open, read, extract, and analyze process.
In a broad view, both of these visualizations are maps of information. Do today’s users want these types of hard-to-understand maps?
In CyberOSINT I explore the role of “maps” (or, more properly, geographic intelligence (geoint), geo-tagging, and geographic outputs) generated from automatically collected and analyzed data.
The idea is that a next generation information access system recognizes geographic data and displays those data on maps. Think in terms of overlays on the eye-popping maps available from commercial imagery vendors.
What do these outputs look like? Let me draw one example from the discussion in CyberOSINT about this important approach to enterprise-related information. Keep in mind that an NGIA system can process any information made available to it; for example, enterprise accounting systems or database content along with text documents.
In response to either a task, a routine update when new information becomes available, or a request generated by a user with a mobile device, the output looks like this on a laptop:
Source: ClearTerra, 2014
With the approach ClearTerra offers, information about customers, prospects, or other data that carries geo-codes appears on a dynamic map. The map can be displayed on the user’s device; for example, a mobile phone. In some implementations, the map is a dynamic PDF file which displays locations of items of interest as the item of interest moves. Think of a person driving a delivery truck or an RFID-tagged package.
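To give a flavor of geo-coded output, here is a minimal sketch (mine, not ClearTerra’s code) that turns records carrying latitude and longitude into a GeoJSON layer a map client can render as an overlay. The records and field names are invented examples.

```python
# Minimal sketch: convert geo-coded records into a GeoJSON layer for a
# dynamic map. The records and field names are invented examples.
import json

records = [
    {"name": "Delivery truck 7", "lat": 38.25, "lon": -85.76},
    {"name": "Prospect meeting", "lat": 40.71, "lon": -74.01},
]

layer = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {
                "type": "Point",
                "coordinates": [r["lon"], r["lat"]],  # GeoJSON order is lon, lat
            },
            "properties": {"name": r["name"]},
        }
        for r in records
    ],
}

print(json.dumps(layer, indent=2))  # hand this to any map client as an overlay
```

Refresh the records and regenerate the layer, and the map becomes the dynamic display described above.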
Enterprise Search: NGIA Vendors Offer Alternative to the Search Box
February 4, 2015
I have been following the “blast from the past” articles that appear on certain content management oriented blogs and news services. I find the articles about federated search, governance, and knowledge related topics oddly out of step with the more forward looking developments in information access.
I am puzzled because the keyword search sector has been stuck in a rut for many years. The innovations touted in the consulting jargon of some failed webmasters, terminated in-house specialists, and frustrated academics are hoary with age and deeply problematic.
There are some facts that cheerleaders for the solutions of the 1970s, 1980s, and 1990s choose to overlook:
- Enterprise search typically means a subset of the content an employee requires to perform work in today’s fluid and mobile environment. The mix of employees and part-timers translates to serious access control work. Enterprise search vendors “support” an organization’s security systems in the manner of a consulting physician at heart surgery: inputs but no responsibility.
- The costs of configuring, testing, and optimizing an old school system are usually higher than the vendor suggests. When the actual costs collide with the budget costs, the customer gets frisky. Fast Search & Transfer’s infamous revenue challenges came about in part because customers refused to pay when the system was not running and working as the marketers suggested it would.
- Employees cannot locate needed information and don’t like the interfaces. The information is often “in” the system but not in the indexes. And if it is in the indexes, the users cannot figure out which combination of keywords unlocks what’s needed. The response is, “Who has time for this?” When a satisfaction measure is taken, somewhere between 55 and 75 percent of the search system’s users report they don’t like it very much.
Obviously organizations are looking for alternatives. Some use open source solutions, which are good enough. Other organizations put up with Windows’ search tools, which are also good enough. More important software systems, like an enterprise resource planning or accounting system, come with basic search functions. Again: These are good enough.
The focus of information access has shifted from indexing a limited corpus of content using a traditional solution to a more comprehensive, automated approach. No software is without its weaknesses. But compared to keyword search, there are vendors pointing customers toward a different approach.
Who are these vendors? In this short write up, I want to highlight the type of information about next generation information access vendors in my new monograph, CyberOSINT: Next Generation Information Access.
I want to highlight one vendor profiled in the monograph and mention three other vendors in the NGIA space which are not included in the first edition of the report but for whom I have reports available for a fee.
I want to direct your attention to Knowlesys, an NGIA vendor operating in Hong Kong and the Nanshan District, Shenzhen. On the surface, the company processes Web content. The firm also provides a free download of scraping software, which is beginning to show its age.
Dig a bit deeper, and Knowlesys provides a range of custom services. These include deploying, maintaining, and operating next generation information access systems for clients. The company’s system can process and make available automatically content from internal, external, and third party providers. Access is available via standard desktop computers and mobile devices:
Source: Knowlesys, 2014.
The system handles both structured and unstructured content in English and a number of other languages.
The company does not reveal its clients and the firm routinely ignores communications sent via the online “contact us” mail form and faxed letters.
How sophisticated is the Knowlesys system? Compared to the other 20 systems analyzed for the CyberOSINT monograph, my assessment is that the company’s technology is on a par with that of other vendors offering NGIA systems. The plus of the Knowlesys system, if one can obtain a license, is that it will handle Chinese and other ideographic languages as well as the Romance languages. The downside is that for some applications, the company’s location in China may be a consideration.
A Glimpse of Enterprise Search in 24 Months
February 3, 2015
The enterprise search sector faces one of its most critical periods in the next 24 months. The open source “commodity” search threat has moved into the mainstream. The value added indexing boomlet has helped make suggestions, point-and-click queries, and facets standard features. Prices for traditional search systems are all over the place. Proprietary technology vendors offer useful solutions for a few hundred dollars. The gap between those prices and the huge license fees of the early 2000s is, in theory, closed by the vendors’ consulting and engineering services revenue.
But the grim reality is that most systems today include some type of information access tool. Whether it is Google’s advertiser-energized model or Microsoft’s attempt to provide information to a Bing user before he or she knows it is wanted, the trend suggests that the human query is slowly being eased out of the system.
I would suggest you read “Replacing Middle Management with APIs.” The article focuses on examples that at first glance seem far removed from locating the name and address of a customer. That view would be one dimensional. The article suggests that another significant wave of disintermediation will take place. Instead of marginalizing the research librarian, next generation software will have an impact on middle management.
Humans, instead of performing decision making functions, become “cogs in a giant automated dispatching machine.” The example applies to an Uber-type operation, but the concept will easily extend to many intermediating tasks.
Here’s the passage I highlighted in yellow this morning:
What’s bizarre here is that these lines of code directly control real humans. The Uber API dispatches a human to drive from point A to point B. And the 99designs Tasks API dispatches a human to convert an image into a vector logo (black, white and color). Humans are on the verge of becoming literal cogs in a machine, completely anonymized behind an API. And the companies that control those APIs have strong incentives to drive down the cost of executing those API methods.
What does this have to do with enterprise search?
I see several possible points of intersection:
First, software can eliminate the much reviled guessing game of finding the keywords that unlock the index. The next generation search system presents information to the user. The user becomes an Uber driver, executing the tasks assigned by the machine. Need a name and address? The next generation system identifies the need, fetches the information, and injects it into a workflow that still requires a human to perform a function. (A minimal sketch of this dispatch pattern appears after the third point below.)
Second, the traditional information retrieval vendors will have to find the time, money, and expertise to overhaul their keyword systems. Cosmetics just will not be enough to deal with the threat of what the author calls application programming interfaces. The disintermediation will not be limited to middle managers. The next wave of work casualties will be companies that sell old school information access systems. The disintermediation of companies anchored in the past will have significant influence over the success of search vendors marketing aggressively 24×7.
Third, users in the Gen X, Millennial, and Gen Y demographics have been conditioned to rely on smart software. Need a pizza? The Apple and Google mapping services deliver, in a manner of speaking. Keywords are just not ideal on a mobile device.
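Here, as promised, is a minimal sketch of the dispatch pattern: software identifies a need, fetches the data, and assigns the task to a human. The endpoint, payload, and customer lookup are hypothetical illustrations, not any real company’s API.

```python
# Minimal sketch of "above the API" dispatch. The endpoint, payload,
# and customer lookup are hypothetical illustrations.
import json
import urllib.request

def fetch_customer(customer_id):
    # Stand-in for a lookup against an internal system of record.
    return {"id": customer_id, "name": "Acme Corp", "address": "1 Main St"}

def dispatch_task(worker_api, task):
    """POST a task to a (hypothetical) human-dispatch endpoint."""
    request = urllib.request.Request(
        worker_api,
        data=json.dumps(task).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# The software, not the employee, decides the next step:
customer = fetch_customer("C-1042")
dispatch_task("https://dispatch.example.com/tasks",  # hypothetical endpoint
              {"action": "visit", "customer": customer})
```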
The article states:
And I suspect these software layers will only get thicker. Entrepreneurial software developers will find ways to tie these APIs together, delivering products that combine several “human” APIs. Someone could use Mechanical Turk’s API to automate sales prospect research, plug that data into 99designs Tasks’ API to prepare customized infographics for the prospect sent via email. Or someone could use Redfin’s API to automatically purchase houses, and send a Zirtual [sic] assistant instructions via email on how to project-manage a renovation, flipping the house completely programmatically. These “real-world APIs” allow complex programs (or an AI in the spooky storyline here), to affect and control things in the real-world. It does seem apropos that we invest in AI safety now. As the software layer gets thicker, the gap between Below the API jobs and Above the API jobs widens. And economic incentives will push Above the API engineers to automate the jobs Below the API: self-driving cars and drone delivery are certainly on the way.
My view is that this API shift is well underway. I document a number of systems that automatically collect, analyze, and output actionable information to humans and to other systems. For more information about next generation information access solutions, check out CyberOSINT, my most recent monograph about information access.
For enterprise search vendors dependent on keywords and hyperbolic marketing, APIs may be one of the most serious challenges the sector has yet faced.
Stephen E Arnold, February 3, 2015