Mindbreeze: A View from the Top
November 9, 2011
Fabasoft Mindbreeze managing director Daniel Fallmann gives his insight to KM World in “Mindbreeze, Managing Director, Daniel Fallmann: View from the Top.”
Using open standards, Mindbreeze offers high-performance enterprise search and digital cognition for all kinds of enterprises. We have developed context-enriching indexing services, which are available without time-consuming set-up procedures. Information access without ironclad security is not a solution; Fabasoft Mindbreeze ensures that only authorized users can access the information. Our product was designed from the beginning to be installed in minutes, obviating expensive installation processes. The Fabasoft Mindbreeze Appliance can be up and running for your users in a matter of hours.
Fallmann, the Fabasoft Mindbreeze founder, talks about his Austrian start-up in this brief video. He succinctly explains how the Mindbreeze solution assists users with internal and external search.
Saving the user from lengthy installation and clunky customization, Mindbreeze integrates seamlessly with an existing platform. Semantic recognition enhances results, making searches not only quick but relevant. Third-party application data is available to mobile devices through Fabasoft Mindbreeze Mobile. Standard installations such as Microsoft SharePoint can lack versatility, and customization becomes lengthy and difficult.
Evaluate your enterprise needs and see if Fabasoft Mindbreeze and its highly efficient solutions might be the right choice for your organization. In Fallmann’s words, “Make informed decisions.”
Emily Rae Aldridge, November 9, 2011
Sponsored by Pandia.com
Business Process Management: Bit Player or Buzz Word?
November 7, 2011
I spoke with one of the goslings who produces content for our different information services. We were reviewing a draft of a write up, and I reacted negatively to the source document and to the wild and crazy notions that find their way into the discussions about “problems” and “challenges” in information technology.
In enterprise search and content management, flag waving is more important than solving customers’ problems. Economic pressure seems to exponentiate the marketing clutter. Are companies with resources “too big to flail”? Nope.
Here’s the draft, with the parts that caught my attention and prompted my push back in bold face:
As the amount of data within a business or industry grows, the question of what to do with it arises. The article “Business Process Management and Mastering Data in the Enterprise,” on Capgemini’s Web site, explains how Business Process Management (BPM) is not the ideal means for managing data.
According to the article, as more and more operations are used to store data, the process of synchronizing the data becomes increasingly difficult.
As for using BPM to do the job, the article explains,
While BPM tools have the infrastructure to hold a data model and integrate to multiple core systems, the process of mastering the data can become complex and, as the program expands across ever more systems, the challenges can become unmanageable. In my view, BPMS solutions with a few exceptions are not the right place to be managing core data. At the enterprise level MDM solutions are far more elegant solutions designed specifically for this purpose.
The answer to this ever-growing problem was arrived at by combining knowledge from both a data perspective and a process perspective. The article suggests that a Target Operating Model (TOM) would act as a rudder for the projects aimed at synchronizing data. Once that was in place, a common information model would be created with enterprise definitions of the data entities, which would then be populated by general attributes fed by a single process project.
While this is just one man’s answer to the problem of data, it is a start. Regardless of how businesses approach the problem, one thing remains constant: process management alone is not efficient enough to meet the demands of data management.
Here’s my concern. First, I think there are a number of concepts, shibboleths, and smoke screens flying, floating, and flapping. The conceptual clutter is crazy. The “real” journalists dutifully cover these “signals.” My hunch is that most of the folks who like videos gobble these pronouncements like Centrum multivitamins. The idea is that one dose with lots of “stuff” will prevent information technology problems from wreaking havoc on an organization.
Three observations:
First, I think that in the noise, quite interesting and very useful approaches to enterprise information management can get lost. Two good examples: Polyspot in France and Digital Reasoning in the U.S. Both companies have approaches which solve some tough problems. Polyspot offers an infrastructure, search, and apps approach. Digital Reasoning delivers next-generation numerical recipes, what the company calls entity based analytics. Baloney like Target Operating Models does not embrace these quite useful technologies.
Second, the sensitivity of indexes and blogs to public relations spam is increasing. The perception that indexing systems are “objective” is fascinating but incorrect. What happens then is that a well-heeled firm can output a sequence of spam news releases and then sit back and watch the “real” journalists pick up the arguments and ideas. I wrote about one example of this in “A Coming Dust Up between Oracle and MarkLogic?”
Third, I am considering a longer essai about the problem of confusing Barbara, Desdemona’s mother’s maid, with Othello. Examples include confusing technical methods or standards with magic potions; for instance, taxonomies as a “fix” for lousy findability and search, semantics as a work around for poorly written information, metatagging as a solution to context-free messages, etc. What’s happening is that a supporting character, probably added by the compilers of Shakespeare’s First Folio edition, is made into the protagonist. Since many recent college graduates don’t know much about Othello, talking about Barbara as the possible name of the man who played the role in the 17th century is a waste of time. The response I get when I mention “Barbara” when discussing the play is, “Who?” This problem is surfacing in discussions of technology. XML, for example, is not a rabbit from a hat. XML is a way to describe the rabbit-hat-magician content and slice and dice the rabbit-hat-magician without too many sliding panels and dim lights.
What is the point of this management and method malarkey? Sales, gentle reader, sales. Hyperbole, spam, and jargon are the Teflon that lets a deal slide through.
Stephen E Arnold, November 7, 2011
Sponsored by Pandia.com
Spotlight: Mindbreeze on the SharePoint Stage
November 1, 2011
This is a new feature, mentioned in the Beyond Search story “Software and Smart Content.” We will be taking a close look at some vendors. Some will be off the board; for example, systems which have been acquired and, for all practical purposes, have had their feature sets frozen. I have enlisted Abe Lederman, one of the founders of Verity (now a unit of Autonomy and Hewlett Packard) and now the chief executive of Deep Web Technologies.
Our first company under the spotlight is Mindbreeze, a unit of Fabasoft, which is one of the leading, if not the leading, Microsoft partners in Austria. Based in Linz, Mindbreeze offers a remarkably robust search and content processing solution.
The company is a leader in adding functionality to basic search, finding, and indexing tasks in organizations worldwide. In August 2011, CMSWire’s “A Strategic Look at SharePoint: Economics, Information & People” made this point:
SharePoint continues to grow in organizations of all sizes, from document collaboration and intranet publishing, to an increasing focus on business process workflows, internet and extranets. Today, many organizations are now in flight with their 2010 upgrades, replacing other portals and ECM applications, and even embracing social computing all on SharePoint.
The Mindbreeze system, according to Daniel Fallmann, the individual who was the mastermind behind the Mindbreeze technology, “snaps in” to Microsoft SharePoint and addresses many of the challenges that a SharePoint administrator encounters when trying to respond to diverse user needs in search and retrieval. In as little as a few hours, maybe a day, a company struggling to locate information in a SharePoint installation can be finding documents using a friendly, graphical interface.
My recollection of Mindbreeze is that it was a “multi-stage” service-oriented architecture. For me, this means that system administrators can configure the system from a central administrative console and work through the graphical set-up screens to handle content crawling (acquisition), indexing, and querying.
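The staged idea is easy to picture in code. Here is a minimal sketch of a crawl, index, and query pipeline; the class and method names are my own invention for illustration, not Mindbreeze’s actual implementation or API.

```python
# Illustrative only: a toy three-stage pipeline (acquire -> index -> query).
# Names are hypothetical, not Mindbreeze's API.
from collections import defaultdict


class SearchPipeline:
    def __init__(self):
        self.documents = {}                     # doc_id -> raw text
        self.inverted_index = defaultdict(set)  # term -> doc_ids

    def acquire(self, doc_id, text):
        """Stage 1: content acquisition (a crawler feeds this in real life)."""
        self.documents[doc_id] = text

    def build_index(self):
        """Stage 2: index the acquired content."""
        for doc_id, text in self.documents.items():
            for term in text.lower().split():
                self.inverted_index[term].add(doc_id)

    def query(self, term):
        """Stage 3: answer queries from the index."""
        return sorted(self.inverted_index[term.lower()])


pipeline = SearchPipeline()
pipeline.acquire("doc1", "quarterly sales report for the Linz office")
pipeline.acquire("doc2", "SharePoint administration guide")
pipeline.build_index()
print(pipeline.query("sales"))  # ['doc1']
```

The point of the separation is that each stage can be configured, monitored, or scaled on its own, which is what makes the central-console administrative story credible.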
The system supports mobile search and can support “apps,” which are quickly becoming the preferred method of accessing certain types of reports. The idea is that a Mindbreeze user from sales can access the content needed prior to a sales call from a mobile device.
According to Andreas Fritschi, a government official at Canton Thurgau:
Fabasoft Mindbreeze Enterprise makes our everyday work much easier. This is also an advantage for our citizens. They receive their information much faster. This software can be used by people in all sectors of public administration, from handling enquiries to people in management.
Why is the tight integration with Microsoft SharePoint important? There are three reasons that our work in search and content processing highlights.
First, there are more than 100 million SharePoint installations and most of the Fortune 1000 are using SharePoint to provide employees with content management, collaboration, and specialized search-centric functions such as locating a person with a particular area of knowledge in one’s organization. With Mindbreeze, these functions become easier to use and require no custom coding to implement within a SharePoint environment.
Second, users are demanding answers, not laundry lists. The Mindbreeze approach allows a licensee to set up the system to deliver exactly what a group of users or a single user requires. The tailoring occurs within the Fabasoft and Mindbreeze “composite content environment.” Fabasoft and Mindbreeze deliver easy-to-use configuration tools. Mash ups are a few clicks away.
Third, Mindbreeze makes use of the Fabasoft work flow technology. Information can be moved from Point A to Point B without requiring users to change their work behaviors. As a result, user satisfaction rises.
You can learn more about Mindbreeze at www.mindbreeze.com. Information about Fabasoft and its technology is at www.fabasoft.com.
Stephen E Arnold, November 1, 2011
Sponsored by Pandia.com
The Perils of Searching in a Hurry
November 1, 2011
I read the Computerworld story “How Google Was Tripped Up by a Bad Search.” I assume that it is pretty close to events as the “real” reporter summarized them.
Let me say that I am not too concerned about the fact that Google was caught in a search trip wire. I am concerned with a larger issue, and one that is quite important as search becomes indexing, facets, knowledge, prediction, and apps. The case reported by Computerworld applies to much of “finding” information today.
Legal matters are rich with examples of big outfits fumbling a procedure or making an error under the pressure of litigation, or even the contemplation of litigation. The Computerworld story describes an email which may be interpreted as shining a bright LED on the Java-in-Android matter. I found this sentence fascinating:
Lindholm’s computer saved nine drafts of the email while he was writing it, Google explained in court filings. Only to the last draft did he add the words “Attorney Work Product,” and only on the version that was sent did he fill out the “to” field, with the names of Rubin and Google in-house attorney Ben Lee.
Ah, the issue of versioning. How many content management experts have ignored this issue in the enterprise? When search systems index, does one want every version indexed or just the “real” version? Oh, and what is the “real” version? A person has to investigate and then make a decision. Software and azure chip consultants, governance and content management experts, and busy MBAs and contractors are often too busy to perform this work. Grunt work, I believe, is how some may describe it.
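To make the decision concrete, here is a minimal sketch of the policy choice an indexing pipeline faces; the record layout and the two policy functions are hypothetical, not drawn from any vendor’s product.

```python
# Illustrative only: one email, nine saved drafts, one sent version.
# Which should the index treat as the "real" document?
drafts = [{"doc_id": "email-42", "version": v, "sent": False} for v in range(1, 10)]
drafts.append({"doc_id": "email-42", "version": 10, "sent": True})

def index_everything(versions):
    """Policy A: index every draft; discovery sweeps will surface them all."""
    return versions

def index_final_only(versions):
    """Policy B: index only the sent version; assumes the metadata is reliable."""
    return [v for v in versions if v["sent"]]

print(len(index_everything(drafts)))   # 10 items appear in search results
print(len(index_final_only(drafts)))   # 1 item, if the "sent" flag is trustworthy
```

Either policy is defensible. The point is that somebody has to choose one, document it, and verify that the metadata actually supports it. That is the grunt work.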
What I am considering is the confluence of people who assume “search” works, the lack of time Outlook and iCalendar “priority one” people face, and the reluctance to sit down and work through documents in a thorough manner. This is part of the “problem” with search, and software is not going to resolve the problem quickly, if ever.
What struck me is how people in a hurry, assumptions about search, and legal procedures underscore a number of problems in findability. But the key paragraph in the write up, in my opinion, was:
It’s unclear exactly how the email drafts slipped through the net, and Google and two of its law firms did not reply to requests for comment. In a court filing, Google’s lawyers said their “electronic scanning tools” — which basically perform a search function — failed to catch the documents before they were produced, because the “to” field was blank and Lindholm hadn’t yet added the words “attorney work product.” But documents produced for opposing counsel should normally be reviewed by a person before they go out the door, said Caitlin Murphy, a senior product manager at AccessData, which makes e-discovery tools, and a former attorney herself. It’s a time-consuming process, she said, but it was “a big mistake” for the email to have slipped through.
What did I think when I read this?
First, all the baloney (yep, the right word, folks) about search, facets, metadata, indexing, clustering, governance, and analytics underscores something I have been saying for a long, long time. Search is not working as lots of people assume it does. You can substitute “eDiscovery,” “text mining,” or “metatagging” for search. The statement holds water for each.
The algorithms will work within limits, but the problem with search has to do with language. Software, no matter how sophisticated, gets fooled by missing data elements, versions, and words themselves. It is high time that the people yapping about how wonderful automated systems are stop and ask themselves this question: “Do I want to go to jail because I assumed a search or content processing system was working?” I know my answer.
Second, in the Computerworld write up, the user’s system dutifully saved multiple versions of the document. Okay, SharePoint lovers, here’s a question for you: Does your search system make clear which antecedent version is which and which document is the best and final version? We know from the Computerworld write up that the Google system did not make this distinction. My point is that the nifty sounding yap about how “findable” a document is remains mostly baloney. Azure chip consultants and investment banks can convince themselves and the widows from whom money is derived that a new search system works wonderfully. I think the version issue makes clear that most search and content processing systems still have problems with multiple instances of documents. Don’t believe me? Go look for the drafts of your last PowerPoint. Now to whom did you email a copy? From whom did you get inputs? Which set of slides was the one on the laptop you used for the briefing? What is the “correct” version of the presentation? If you cannot answer the question, how will software?
Software and Smart Content
October 30, 2011
I was moving data from Point A to Point B yesterday, filtering junk that has marginal value. I scanned a news story from a Web site which covers information technology with a Canadian perspective. The story was “IBM, Yahoo turn to Montreal’s NStein to Test Search Tool.” In 2006, IBM was a pace-setter in search development cost control. The company was relying on the open source community’s Lucene technology, not the wild and crazy innovations from Almaden and other IBM research facilities. Web Fountain and jazzy XML methods were promising ways to make dumb content smart, but IBM needed a way to deliver bread-and-butter findability at a sustainable, acceptable cost. The result was OmniFind. I had made a note to myself that we tested the Yahoo OmniFind edition when it became available and noted:
Installation was fine on the IBM server. Indexing seemed sluggish. Basic search functions generated a laundry list of documents. Ho hum.
Maybe this comment was unfair, but five years ago, there were arguably better search and retrieval systems. I was in the midst of the third edition of the Enterprise Search Report, long since bastardized by the azure chip crowd and the “real” experts. But we had a test corpus, lots of hardware, and an interest in seeing for ourselves how tough it was to get an enterprise search system up and running. Our impression was that most people would slam in the system, skip the fancy stuff, and move on to more interesting things such as playing Foosball.
Thanks to Adobe for making software that creates a need for Photoshop training. Source: http://www.practical-photoshop.com/PS2/pages/assign.html
Smart, Intelligent… Information?
In this blast-from-the-past article, NStein’s 2006 product was described as “an intelligent content management product used by media companies such as Time Magazine and the BBC, and a text mining tool called NServer.” The idea was to use search plus a value adding system to improve the enterprise user’s search experience.
The use of the word “intelligent” to describe a content processing system has a long history, reaching back through the decades to computer-aided logistics and forward to Extensible Markup Language methods.
The idea of “intelligent” is a pregnant one, with a gestation period measured in decades.
Flash forward to the present. IBM markets OmniFind and a range of products which provide basic search as a utility function. NStein is a unit of OpenText, and it has been absorbed into a conglomerate with a number of search systems. The investment needed to update, enhance, and extend BASIS, BRS Search, NStein, and the other systems OpenText “sells” is a big number. “Intelligent content” has not been an OpenText buzzword for a couple of years.
The torch has been passed to conference organizers and a company called Thoora, which “combines aggregation, curation, and search for personalized news streams.” You can get some basic information in the TechCrunch article “Thoora Releases Intelligent Content Discovery Engine to the Public.”
In two separate teleconference calls last week (October 24 to 28, 2011), “intelligent content” came up. In one call, the firm was explaining that traditional indexing systems missed important nuances. By processing a wide range of content and querying a proprietary index of the content, the information derived from the content would be more findable. When a document was accessed, the content was “intelligent”; that is, the document contained value added indexing.
The second call focused on the importance of analytics. The content processing system would ingest a wide range of unstructured data, identify items of interest such as the name of a company, and use advanced analytics to make relationships and other important facets of the content visible. The documents were decomposed into components, and each of the components was “smart.” Again, the idea is that the fact or component of information was related to the original document and to the processed corpus of information.
No problem.
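Both calls describe the same pattern: decompose a document, tag the pieces, and keep a pointer back to the source. Here is a minimal sketch of that pattern; the extraction rule and the data layout are my own toy illustration, not either firm’s technology.

```python
# Illustrative only: decompose documents into "components" that carry provenance.
import re

documents = {
    "memo-17": "Digital Reasoning and Polyspot both shipped updates this quarter.",
    "memo-18": "OpenText absorbed NStein after the acquisition closed.",
}

def extract_components(doc_id, text):
    """Toy rule: capitalized tokens are candidate entities. Real systems use
    statistical entity extraction, not this regex."""
    return [
        {
            "value": m.group(0),
            "type": "candidate_entity",
            "source_doc": doc_id,    # the "smart" part: provenance travels with the fact
            "offset": m.start(),
        }
        for m in re.finditer(r"\b[A-Z][A-Za-z]+\b", text)
    ]

corpus = []
for doc_id, text in documents.items():
    corpus.extend(extract_components(doc_id, text))

# A "smart" component can always be traced back to its original document.
print([c for c in corpus if c["value"] == "NStein"])
```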
Shift in Search
We are witnessing another one of those abrupt shifts in enterprise search. Here’s my working hypothesis. (If you harbor a lifelong love of marketing baloney, quit reading because I am gunning for this pressure point.)
First, let’s face it: enterprise search is just not revving the engines of the people in information technology or the chief financial officer’s office. Money pumped into search typically generates a large number of user complaints, security issues, and cost spikes. As content volume goes up, so do costs. The enterprise is not Google-land, and money is limited. The content is quite complex, and who wants to try to crack the nut of 21st century data flows with 1990s technology? Not I. So something hotter is needed.
Second, the hottest trends in “search” have nothing to do with search whatsoever. Examples include conflating the interface with precision and recall. Sorry. Does not compute for me. The other angle is “mobile.” Sure, search will work when everything is monitored and “smart” software provides a statistically appropriate method that will work “most” of the time. There is also the baloney about apps, which is little more than the gamification of what in many cases might better be served with a system that makes the user confront actual data, not an abstraction of data. What this means is that people are looking for a way to provide information access without having to grunt around in the messy innards of editorial policies, precision, recall, and other tasks that are intellectually rigorous in a way that Angry Birds interfaces for business intelligence are not.
Third, companies engaged in content access are struggling for revenue. Sure, the best of the search vendors have been purchased by larger technology companies. These acquisitions guarantee three things.
- The Wild West spirit of the innovative content processing vendors is essentially going to be stamped out. Creativity will be herded into the corporate killing pens, and the “team” will be rendered as meat products for a technology McDonald’s.
- The cash sinkholes that search vendors’ research programs were will be filled with procedure manuals and forms. There is no money for blue sky problem solving to crack the tough problems in information retrieval at a Fortune 1000 company. Cash can be better spent on things that may actually generate a return. After all, if the search vendors were so smart, why did most companies hit revenue ceilings and have to turn to acquisitions to generate growth? For firms unable to grow revenues, some just fiddled the books. Others had to get injections of cash like a senior citizen in the last six months of life in a care facility. So acquired companies are not likely to be hot beds of innovation.
- The pricing mechanisms which search vendors have so cleverly hidden, obfuscated, and complexified will be tossed out the window. When a technology is a utility, giant corporations will incorporate some of the technology in other products to make a sale.
What we have, therefore, is a search marketplace where the most visible and arguably successful companies have been acquired. The companies still in the marketplace now have to market like the Dickens and figure out how to cope with free open source solutions and giant acquirers who will just give away search technology.
Access Innovations Awarded Patent for MAIChem
October 28, 2011
Bravo to our friends at Access Innovations for receiving a U.S. patent (the company’s 19th technology patent) for MAIChem, a software-based method for searching chemical names in documents.
The company, founded in 1978, focuses on Internet technology applications and content management and enhancement. MAIChem is a tool that will be highly useful for researchers and information managers in the chemical and pharmacy data industries. A press release, “Access Innovations Receives U.S. Patent for Unique MAIChem™ Software Search Method: Software Provides Fast, In-Depth, Broad and Consistently Accurate Searches of Chemical and Pharmaceutical Industry Data,” shares details about the tool:
“Finding these names in documents is challenging due to the unlimited number of potential compounds and the variety of ways a compound can be named. MAIChem solves the problem by comparing the text to regular expressions that match typical chemical morphemes, such as ‘hydro’ or ‘amine,’ to see if they occur in words,” explained Marjorie M.K. Hlava, president of Access Innovations. After its initial analysis, MAIChem’s software differentiates between nonchemical words that use the morphemes and actual chemical names.
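Based only on the description in the press release, here is a minimal sketch of the morpheme-matching idea; the morpheme list, exclusion list, and filtering rule are my own illustration, not Access Innovations’ patented method.

```python
# Illustrative only: flag words containing chemical morphemes, then screen out
# everyday words that happen to contain the same strings.
import re

MORPHEMES = ["hydro", "amine", "chlor", "meth"]     # tiny sample list
MORPHEME_RE = re.compile("|".join(MORPHEMES), re.IGNORECASE)
NONCHEMICAL = {"examine", "methodical"}             # toy exclusion list

def candidate_chemical_terms(text):
    hits = []
    for word in re.findall(r"[A-Za-z][A-Za-z-]+", text):
        if MORPHEME_RE.search(word) and word.lower() not in NONCHEMICAL:
            hits.append(word)
    return hits

sample = "Examine the methodical synthesis of methylamine hydrochloride."
print(candidate_chemical_terms(sample))  # ['methylamine', 'hydrochloride']
```

Note how “examine” contains the morpheme “amine” but is screened out; distinguishing such words from real chemical names at scale is exactly the hard part the patent addresses.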
MAIChem could potentially help in numerous fields and tasks: content discovery, analysis, machine-aided indexing, and faster information retrieval. The award of this patent shows Access Innovations is bringing something unique to the table in content management. Chemistry professionals should be swooning; Access Innovations is taking it to the next level. Congratulations from the team at Beyond Search.
For more information about Access Innovations’ MAIChem, visit http://www.dataharmony.com/products/maichem.html. Now maybe the faux taxonomy experts will realize there is more to ANSI standard vocabularies than a slick marketing program and a reference to military training. We can only hope.
Andrea Hayden, October 28, 2011
PolySpot Wins over OSEO with Enterprise Search
October 28, 2011
Paris-based PolySpot’s reliability, in conjunction with its innovative technologies, paid off. In the news release “OSEO Opts for a New Search Engine with PolySpot,” we got to hear about many of the specifics that made PolySpot stand out from the competition.
First, let’s look at the issues that prompted OSEO to make the switch. OSEO had a Java-based directory in addition to a search engine supplied with its open source content management system.
OSEO’s former service was characterized by the following:
Indexing of data was restricted to the intranet and the search engine picked up too much ‘noise’. The users, unable to locate required information quickly, were no longer satisfied with the existing search engine which offered basic functionality.
Frédéric Vincent, Information System and Quality Assurance Manager, champions the decision to use PolySpot Enterprise Search.
The functionalities that comprise an intuitive user interface make PolySpot’s Search stand out: users can now customize their internal search tool, see added-value tags related to their queries in a tag cloud, and access search without quitting other applications.
We think it may be a prudent step to check out PolySpot’s solutions at www.polyspot.com.
Megan Feil, October 28, 2011
Sponsored by Pandia.com
Facebook and Semantic Search
October 27, 2011
Stories about Facebook search surface and then disappear. For years we have wondered why Facebook resists indexing the URLs posted by Facebook members. Our view is that for the Facebook crowd, this curated subset of Web pages would be a useful reference resource. With Facebook metadata, the collection could become quite interesting in a number of dimensions.
Not yet, anyway. But the ongoing war between Web giants Facebook and Google doesn’t seem to be stopping at social media.
Last spring, Facebook was beavering away to create a semantic search engine using metadata, based on the company’s Open Graph system and on collected data about every user. Few companies have the ability to build a semantic search engine, but with Facebook’s scale (over 400 million users), the company has the ability to create something huge. We learn more in AllFacebook’s article, “Facebook Seeks To Build the Semantic Search Engine”:
There are a number of standards that have been created in the past as some developers have pointed out, microformats being the most widely accepted version, however the reduction of friction for implementation means that Facebook has a better shot at more quickly collecting the data. The race is on for building the semantic web and now that developers and website owners have the tools to implement this immediately.
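To make the “reduction of friction” point concrete: Open Graph metadata is just a handful of meta tags in a page’s head, which any crawler can lift. Here is a minimal sketch using Python’s standard library; the sample page is invented.

```python
# Illustrative only: lift Open Graph (og:*) properties out of an HTML page.
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property") or ""
        if prop.startswith("og:"):
            self.properties[prop] = attrs.get("content", "")

sample_page = """
<html><head>
  <meta property="og:title" content="A Movie" />
  <meta property="og:type" content="video.movie" />
  <meta property="og:url" content="http://example.com/a-movie" />
</head><body>Review text here.</body></html>
"""

parser = OpenGraphParser()
parser.feed(sample_page)
print(parser.properties)
# {'og:title': 'A Movie', 'og:type': 'video.movie', 'og:url': 'http://example.com/a-movie'}
```

A few lines of markup per page, multiplied by hundreds of millions of shared links, is the data collection advantage the article describes.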
The source document appeared in April 2011, and here we are in the run-up to Turkey Day with no semantic search system. Now we are wondering whether Facebook has concluded that search is yesterday’s business or whether the company is struggling with the implementation of semantic technology in a social space.
We will keep watching.
Andrea Hayden, October 27, 2011
Sponsored by Pandia.com
Microsoft on Semantic Search
October 25, 2011
We were interested to learn that semantic search is alive and kicking. A helping hand may be needed, but semantic search is not on life support.
Microsoft is making baby steps toward more user-friendly services, particularly in the realm of semantic search. MSDN Library offers information and assistance for developers using Microsoft products and services. While browsing the site, I came across one reference article that I found particularly useful.
“Semantic Search (SQL Server)” is a write up which is still in its “preview” stage, so it is short and has a few empty links, but it provides quite a bit of insight and examples that are very useful for someone attempting to integrate Statistical Semantic Search in SQL Server databases. This process, we learn, extracts and indexes statistically relevant key phrases and uses these phrases to identify and index documents that are similar or related. A user queries these semantic indexes by using Transact-SQL rowset functions.
The document tells us:
Semantic search builds upon the existing full-text search feature in SQL Server, but enables new scenarios that extend beyond keyword searches. While full-text search lets you query the words in a document, semantic search lets you query the meaning of the document. Solutions that are now possible include automatic tag extraction, related content discovery, and hierarchical navigation across similar content. For example, you can query the index of key phrases to build the taxonomy for an organization, or for a corpus of documents.
The article goes on to explain various features of semantic search, such as finding key phrases in a document, finding similar or related documents, or even finding the key phrases that make documents similar or related. Add in coverage of storage, installation, and indexing, and we have a good “how-to” move from Microsoft. With Powerset, Fast Search, and Cognition Technologies, Microsoft should be one of the aces in semantic search.
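For readers who want to see the shape of such a query, SEMANTICKEYPHRASETABLE and SEMANTICSIMILARITYTABLE are among the Transact-SQL rowset functions the article covers. Here is a minimal sketch of calling them from Python; the connection string, table name, column name, and document key are invented, and the Documents table is assumed to have full-text and semantic indexing enabled.

```python
# Illustrative only: querying SQL Server's Statistical Semantic Search indexes.
# Server, database, table, column, and key values below are invented.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Top key phrases for the document whose key is 1.
cursor.execute("""
    SELECT TOP(10) KP.keyphrase, KP.score
    FROM SEMANTICKEYPHRASETABLE(Documents, Content, 1) AS KP
    ORDER BY KP.score DESC
""")
for keyphrase, score in cursor.fetchall():
    print(keyphrase, score)

# Documents most similar to document 1.
cursor.execute("""
    SELECT TOP(10) S.matched_document_key, S.score
    FROM SEMANTICSIMILARITYTABLE(Documents, Content, 1) AS S
    ORDER BY S.score DESC
""")
for matched_key, score in cursor.fetchall():
    print(matched_key, score)

conn.close()
```

The appeal is that the “semantic” layer is exposed as ordinary rowsets, so the key phrase and similarity results can be joined against the rest of the database like any other table.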
Andrea Hayden, October 25, 2011
Sponsored by Pandia.com
Google and the Perils of Posting
October 21, 2011
I don’t want to make a big deal out of a simple human mistake from a button click. I just had eye surgery, and it is a miracle that I can [a] find my keyboard and [b] make any function on my computers work.
However, I did notice this item this morning and wanted to snag it before it magically disappeared due to mysterious computer gremlins. The item in question is “Last Week I Accidentally Posted,” via Google Plus at this url. I apologize for the notation style, but Google Plus posts come with the weird use of the “+” sign, which is a killer when running queries on some search systems. Also, there is no title, which means this is more of a James Joyce type of writing than a standard news article or even a blog post from the addled goose in Harrod’s Creek.
To get some context you can read my original commentary in “Google Amazon Dust Bunnies.” My focus in that write up is squarely on the battle between Google and Amazon, which I think is a more serious confrontation than the unemployed English teachers, aging hippies turned consultants, and the failed yet smarmy Web masters who have reinvented themselves as “search experts” think.
Believe me, Google versus Amazon is going to be interesting. If my research is on the money, the problems between Google and Amazon will escalate to, and may surpass, the tension that exists between Google and Oracle, Google and Apple, and Google and Viacom. (Well, Viacom may be different because that is a personal and business spat, not just big companies trying to grab the entire supply of apple pies in the cafeteria.)
In the Dust Bunnies write up, I focused on the management context of the information in the original post and the subsequent news stories. In this write up, I want to comment on four aspects of this second post about why Google and Amazon are both so good, so important, and so often misunderstood. If you want me to talk about the writer of these Google Plus essays, stop reading. The individual’s name which appears on the source documents is irrelevant.
1. Altering or Idealizing What Really Happened
I had a college professor, Dr. Philip Crane, who told us in history class in 1963, “When Stalin wanted to change history, he ordered history textbooks to be rewritten.” I don’t know if the anecdote is true or not. Dr. Crane went on to become a US congressman, and you know how reliable those folks’ public statements are. What we have in the original document and this apologia is a rewriting of history. I find this interesting because the author could use other methods to make the content disappear. My question: “Why not?” And, “Why revisit what was a pretty sophomoric tirade involving a couple of big companies?”
2. Suppressing Content with New Content
One of the quirks of modern indexing systems such as Baidu, Jike, and Yandex is that once content is in the index, it can persist. As more content on a particular topic accretes “around” an anchor document, the document becomes more findable. What I find interesting is that despite the removal of the original post, the secondary post continues to “hook” to discussions of that original post. In fact, the snippet I quoted in “Dust Bunnies” comes from a secondary source. I have noted and adapted to “good stuff” disappearing as a primary document. The only evidence of the document’s existence is the set of secondary references. As these expand, the original item becomes more visible and more difficult to suppress. In short, the author of the apologia is ensuring the findability of the gaffe. Fascinating to me.
3. Amazon: A Problem for Google