Attensity’s Newest Partner
February 17, 2009
Attensity is out leveraging its text analytics software. The company just partnered with enherent, whose tagline is "Gather, manage and transform your data and content into timely, secure, actionable intelligence." Now enherent will be using Attensity's software to perform the analytics, manage risk, and review customer input on a larger scale. A press release said the idea is to take advantage of new "ideas in a time when business success depends on innovation." While enherent gets Attensity's First Person Intelligence Platform with its vocabularies, analytics, and subject matter expertise, Attensity gets exposure to a long list of customers and resources, more expertise in text analytics to advance its skill sets, and a higher profile in the industry. Attensity looks like it's making smart decisions for the future. Keep an eye on them.
Jessica W. Bratcher, February 17, 2009
What Is Vint Cerf Saying
February 16, 2009
Lidija Davis's "Vint Cerf: Despite Its Age, the Internet Is Still Filled with Problems" does a good job of providing an overview of Vint Cerf's view of the Internet. You can read the article here. Ms. Davis provides a snapshot of the issues that must be addressed, if she captured the Google evangelist's thoughts accurately:
According to Cerf, and many others, inter-cloud communication issues such as formats and protocols, as well as inter or intra-cloud security need to be addressed urgently.
I found the comments about bit rot interesting and highly suggestive. She quite rightly points out that her summary presents only a small segment of the talk.
When I read her pretty good write up, I had one thought: "Google wants to become the Internet." If the company pulls off this grand slam play, then the issues identified by Evangelist Cerf can be addressed in a more forthright manner. My reading of the Guha patent documents, filed in February 2007, reveals some of the steps Google's programmable search engine includes to tackle the problems Mr. Cerf identified and Ms. Davis reported. I find the GoogleNet an interesting idea to ponder. With some content pulled from Google caches and the Google CDN (content delivery network), Google may be the appropriate intermediary and enforcer in this increasingly unstable "space".
Stephen Arnold, February 16, 2009
Another Google Glitch
February 16, 2009
More technical woes befuddle the wizards at Google. According to SERoundTable's article "Google AdSense and AdWords Reportings Takes a Weekend Break" [sic] here, these systems' analytics reports did not work. I wonder if Googzilla took a rest on Valentine's Day? The story provides a link to Google's "good news" explanation of the problem in AdWords help. SERoundTable.com provides links to the various "discussions" and "conversations" about this issue. This addled goose sees these as "complaints" and "snarls", but that's the goose's refusal to use the lingo of the entitlement generation.
Call it what you will. The GOOG has been showing technical missteps with what the goose sees as increasing frequency. The Google plumbing reached state of the art in the 1998 to 2004 period. Now the question is can the plumbing and the layers of software piled on top of Chubby and the rest of the gang handle the challenges of Facebook.com and Twitter.com? Google knows what to do to counter these real time search challengers. The question is, “Will its software system and services allow Googzilla to deal with these threats in an increasingly important search sector?” I am on the fence because of these technical walkabouts in mission critical systems like AdSense and AdWords. Who would have thought that the GOOG couldn’t keep its money machine up and running on Cupid’s day? Is there a lack of technical love in Mountain View due to other interests?
Stephen Arnold, February 16, 2009
Simplexo in the Middle East
February 16, 2009
There's another open source enterprise search company making connections in the industry. Simplexo provides "single click" searches across both structured (databases, payroll systems) and unstructured (e-mail, images, text files, etc.) material in real time under customizable security. Beyond Search wrote about Simplexo back in September. See http://arnoldit.com/wordpress/2008/09/11/simplexo-another-open-source-enterprise-search-platform/. Simplexo is privately owned and has a background in electronic document management and retrieval. It was a startup then, and six months later the company has announced a strategic partnership with enterprise e-business solutions provider Duroob Information Technology (DIT) (http://duroob.esolutions-road.com/index.php?q=taxonomy/term/15) in the Middle East, which has connections to big players Oracle (http://www.oracle.com/index.html) and Computer Associates (http://www.ca.com/). Simplexo is a company to keep an eye on to make sure it's backing up its boasts with real, hard data delivery.
Jessica W. Bratcher, February 16, 2009
Guidance That Leads Astray
February 16, 2009
I find this story somewhat difficult to believe. In fact, if it were not distributed by the estimable Yahoo here, I would have ignored the write up. The core of the story is that a firm providing eDiscovery systems, software, and services to law firms and corporate legal departments mishandled its own eDiscovery process. The introductory paragraphs of the Yahoo story seem like a television treatment for a Law and Order rerun:
Guidance Software Inc. bills itself as the leading provider of technology that helps companies dig up old e-mails and other electronic documents that might be evidence in a lawsuit. Yet when Guidance itself had to face a judge, it was accused of bumbling its internal digital search. Whether Guidance intentionally hid documents or just couldn’t find them is a matter of dispute. The company said it did all that was required. But its inability to cough up certain e-mails, even over several months, led an arbitrator to accuse it of gross negligence and proceeding in bad faith.
I don't quote from Associated Press stories. Their legal eagles frighten 65-year-old geese here in Harrod's Creek. If you have the courage, you can read the Associated Press's version of this story here. Keep in mind that I don't know if this is accurate or an elaborate spoof. But I quite fancy the award graphic on the Guidance Web site.
If you want more information about the company, Guidance Software, Inc., click here. If you are looking for an eDiscovery vendor, you might want to double check your short list. I can suggest one outfit that would not make me comfortable if this remarkable Yahoo News story turned out to be accurate. I am on the fence about this eDiscovery episode. It would be orange jump suit territory if an eDiscovery company could not perform eDiscovery.
Stephen Arnold, February 16, 2009
Mysteries of Online 6: Revenue Sharing
February 16, 2009
This is a short article. I was finishing the revisions to my monetization chapter in Google: The Digital Gutenberg and ran across notes I made in 1996, the year in which I wrote several articles about online for Online Magazine. One of the articles won the best paper award, so if you are familiar with commercial databases, you can track down this loosely coupled series in the LITA reference file or other Dialog databases.
Terms Used in this Write Up
database | A file of electronic information in a format specified by the online vendor; for example, Dialog Format A or EBCDIC
database producer | An organization that creates a machine-readable file designed to run on a commercial online service
online revenue | Cash paid to a database producer when a user connected to an online database and displayed the results of a search online or output them to a file or hard copy
online vendor | A commercial enterprise that operated a time-sharing service, search system, and customer support service on a fee basis; that is, annual subscription, online connect charge, online type or print charge
publisher | An organization engaged in creating content by collecting submissions or paying authors to create original articles, reports, tables, and news
revenue | Money paid by an organization or a user to access an online vendor's system and then connect to and access the content in a specific database; for example, Dialog File 15 ABI/INFORM
My "mysteries" series has evoked some comments, mostly uninformed. The number of people who started working in search when IBM STAIRS was the core tool is dwindling. The people who cut their teeth in the granite-choked world of commercial online comprise an even smaller group. Commercial online began with US government funding in the early 1960s, so Ruby-loving script kiddies are blissfully ignorant of how online files were built and then indexed. No matter. The lessons form foundation stones in today's online world.
Indexing and Abstracting: A Backwater
Aggregators collect content from many different sources. In the early days of online, this meant peer reviewed articles. Then the net widened to magazines and other non-peer reviewed publications such as trade association titles. Indexing and abstracting in the mid 1960s was a backwater because few publishers knew much about online. Permission to index and abstract was often not required, and when a publisher wanted to know why an outfit was indexing and abstracting a publication, the answer was easy: "We are creating a library reference book." Most publishers cooperated, often providing some of the indexing and abstracting outfits with multiple copies of their publications.
Some of the indexing and abstracting was very difficult; for example, legal, engineering, and medical information posed special problems. The vocabulary used in the documents was specialized, and word lists with Use For and See Also references were essential to indexing and abstracting. The abstract might define a term or an acronym when it referenced certain concepts. When abstracts were included with a journal article, the outfit doing the indexing and abstracting would often ask the publisher if it was okay to include that abstract in the bibliographic record. For decades publishers cooperated.
The reason was that publishers and indexing and abstracting outfits were mutually reinforcing operations. The publishers collected money from subscribers, members, and in some cases advertisers. The abstracting and indexing shops earned money by creating print and electronic reference materials. In order to "read the full text", the researcher had to have access to a hard copy of the source document or, in some cases, a microfilm instance of the document.
No money was exchanged in most cases. I think there was trust among publishers and indexing and abstracting outfits. Some of the people engaged in indexing and abstracting created products so important to certain disciplines that courses were taught in universities worldwide to teach budding scientists and researchers how to "find" and "use" indexes, abstracts, and source documents. Examples include the Chemical Abstracts database, Beilstein, and ABI/INFORM, the database with which I was associated for many years.
Pay to Process Content
By 1982, some publishers were aware that abstracting and indexing outfits were becoming important revenue generators in their own right. Libraries were interested in online, first in catalogs for their patrons, and then in licensing certain content directly from the abstracting and indexing shops. One reason for this interest from libraries (medical, technical, university, public, etc.) was that the technology to ingest a digital file (originally on tape) was becoming available. A second reason was that the cost of using commercial online services, which made hundreds of individual abstract and index databases available, was variable. The library (academic or corporate) would obtain a password and a license. Each database incurred a charge, usually billed either by the minute or per query. Then there were the online connect charges imposed by outfits like Tymnet and other network services. And there were even charges for line returns on the original Lexis system. Libraries had limited budgets, so it made sense for some libraries to cut the variable costs by loading databases on a local system. A rough sketch of that arithmetic appears below.
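To make that cost calculus concrete, here is a minimal sketch in Python. Every rate and fee below is invented for illustration only; these are not actual Dialog, Tymnet, or license prices. The point is simply that per-minute and per-record charges scale with usage, while a locally loaded database behaves like a roughly fixed cost.

```python
# Hypothetical figures only: a toy comparison of variable online charges
# versus the roughly fixed cost of loading a licensed database locally.

def online_cost(searches_per_year, minutes_per_search,
                network_rate_per_minute=1.00,    # Tymnet-style connect charge (invented)
                database_rate_per_minute=2.00,   # per-database connect charge (invented)
                output_charge_per_search=0.50):  # type/print charge per search (invented)
    """Variable cost: every search adds network, database, and output charges."""
    per_search = minutes_per_search * (network_rate_per_minute + database_rate_per_minute)
    per_search += output_charge_per_search
    return searches_per_year * per_search


def local_load_cost(annual_license_fee=25000.00, local_system_cost=10000.00):
    """Roughly fixed cost: license the tape and run it on a local system."""
    return annual_license_fee + local_system_cost


if __name__ == "__main__":
    for searches in (1000, 5000, 20000):
        variable = online_cost(searches, minutes_per_search=10)
        fixed = local_load_cost()
        choice = "load locally" if fixed < variable else "stay online"
        print(f"{searches:6d} searches/year: online ${variable:,.0f} vs local ${fixed:,.0f} -> {choice}")
```

At low search volumes the variable charges are tolerable; as usage climbs, the fixed cost of a local load wins, which is exactly the pressure the libraries felt.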
By 1985, full text became more attractive to users. The reason was that A&I (abstracting and indexing) services provided only pointers. The user then had to go find and read the source document. The convenience of having the bibliographic information and the full text online was obvious to anyone who performed research in anything other than a casual, indifferent manner. The notion of disintermediation expanded first in the A&I field because, with full text, why pay to create a formal bibliographic record and manually assign index terms? The future was full text because systems could provide pointers to documents. Then the document of interest to the researcher could be saved to a file, displayed on screen, or printed for later reference.
The shift from the once innovative A&I business to the full text approach threw a wrench into the traditional reference business. Publishers were suspicious and then fearful that if the full text of their articles were in online systems, subscription revenues would fall. The publishers did not know how much risk these systems posed, but some publishers like Crain's Chicago Business wanted an upfront payment to permit my organization to create full text versions of certain articles in the Crain publications. The fees were often in the five figure range and had additional contractual obligations attached. Some of these original constraints may still be in operation.
Negotiating an online deal is similar to haggling to buy a sheep in an open market. The authors were often included among the sheep in the traditional marketplace for information. Source: http://upload.wikimedia.org/wikipedia/commons/thumb/0/0e/Haggling_for_sheep.jpg/800px-Haggling_for_sheep.jpg
Revenue Sharing
Online vendors like Dialog Information Services knew that change was in the air. Some vendors like Dialog and LexisNexis moved to disintermediate the A&I companies. Publishers jockeyed to secure premium deals for their full text material. One deal which still resonates at LexisNexis today was the New York Times's arrangement to put its content into the LexisNexis services. At its height, the rumor was that LexisNexis paid more than $1 million for the exclusive. The New York Times then decided that it could do better by starting its own online system. Because publishers saw only part of the online puzzle, the New York Times's decision was a fateful one which has hobbled the company to the present day. The New York Times did not understand the cost of the infrastructure and the importance of habituated users who respond to the magnetism of an aggregate service. Pull out a chunk of content, even the New York Times's content, and what you get is a very expensive service with insufficient traffic to pay the overall cost of the online operation. Publishers making this same mistake include Dow Jones, the Financial Times, and others. The publishers will bristle at my assertion that their online businesses are doomed to be second string players, but look at where the money is today. I rest my case.
To stay in business, online players cooked up the notion of revenue sharing. There were a number of variations of this business model. The deal was rarely 50-50 for the simple reason that, as contention and distrust grew among the vendors, the database companies, and the publishers, knowledge of costs was very difficult to get. Without an understanding of costs in online, most organizations are doomed to paddling upstream in a creek that runs red ink. The LexisNexis service may never be able to work off the debt that hangs over the company from its money sucking operations that date from the day the New York Times broke off to go on its own. Dow Jones may never be able to pay off the costs of the original Dow Jones online service, which ran on the mainframe BRS search system, and then the expensive joint venture with Reuters that is now a unit in Dow Jones called Factiva. Ziff Communications made online pay with its private label CompuServe service and its savvy investments in the high margin database operations that did business as Information Access. Characteristic of Ziff's acumen, the Ziff organization exited the online database business in the early 1990s and sold off the magazine properties, leaving the Ziff group with another fortune in the midst of the tragedy of Mr. Ziff's health problems. Other publishers weren't so prescient.
With knowledge in short supply, here were the principal models used for revenue sharing:
Tactic A: Pool and Payout Based on Percentage of Content from Individual Publishers
This was a simple way to compensate publishers. The aggregator would collect revenues. The aggregator would scrape off an amount to cover various costs. The remainder would then be divided among the content providers based on the amount of content each provider contributed. To keep the model simple (it wasn't), think of a gross online revenue of $110. Take off $10 for overhead (the actual figure was variable and much larger). The remainder is $100. One publisher provided 60 percent of the content in the pay period. Another publisher provided 40 percent of the content in the pay period. One publisher got a check for $60 and the other a check for $40. (A toy calculation of this model appears below.) The pool approach guarantees that most publishers get some money. It also makes it difficult to explain to a publisher how a particular dollar amount was calculated. Publishers who turned an MBA loose on these deals would usually feel that their "valuable" content was getting shortchanged. It wasn't. The fact is that disconnected articles are worth less in a large online file than a collection of articles in a branded traditional magazine. But most publishers and authors today don't understand this simple fact of the value of an individual item within a very large collection.
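Here is the toy calculation in Python, just to make Tactic A explicit. The numbers mirror the example above; real pools, overhead figures, and content measures were far messier.

```python
# Toy version of Tactic A: pool the gross revenue, skim overhead,
# and split the remainder by each publisher's share of contributed content.

def pool_and_payout(gross_revenue, overhead, content_share_by_publisher):
    """Return each publisher's payout under the pool model."""
    pool = gross_revenue - overhead
    total = sum(content_share_by_publisher.values())
    return {
        publisher: pool * (share / total)
        for publisher, share in content_share_by_publisher.items()
    }


if __name__ == "__main__":
    payouts = pool_and_payout(
        gross_revenue=110.00,
        overhead=10.00,  # in practice, variable and much larger
        content_share_by_publisher={"Publisher A": 60, "Publisher B": 40},
    )
    for publisher, amount in payouts.items():
        print(f"{publisher}: ${amount:.2f}")  # Publisher A: $60.00, Publisher B: $40.00
```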
I was fascinated when smart publishers would pull out of online services and then try to create their own stand-alone online services without understanding the economic forces of online. These forces operate today, and few understand them after more than 40 years of use cases.
Truescoop: Social Search with a Twist
February 16, 2009
As social media keeps expanding, privacy and security are in the spotlight, especially on sites like MySpace and Facebook, where you can list home address, birthdays, phone numbers, e-mails, and family connections, and post pictures and life details. That information is then available to anyone. Every time you access an application on Facebook, you have to click through a warning screen that tells you that the app will be gathering your personal information. And now there's Truescoop, http://www.truescoop.com, a Facebook tool at http://apps.facebook.com/truescoop/ specifically designed to target that personal information. Truescoop's database of millions of records and photos is meant to help people discover personal and criminal histories. So if you find a date on Facebook, you can check them out first, right? Whoa. We all know that there are issues with the Internet and personal privacy, but how much is going too far? Although Truescoop says its service is confidential, the information is not: Truescoop also allows users to share discoveries with others and comment on someone's personal profile. Time to be more cautious. Consider carefully what information you post on your blogs and sites. You don't want some other goose to steal your golden egg.
Jessica W. Bratcher, February 16, 2009
Exclusive Interview with David Milward, CTO, Linguamatics
February 16, 2009
Stephen Arnold and Harry Collier interviewed David Milward, the chief technical officer of Linguamatics, on February 12, 2009. Mr. Milward will be one of the featured speakers at the April 2009 Boston Search Engine Meeting. You will find minimal search "fluff" at this important conference. The focus is upon search, information retrieval, and content processing. You will find no staffed trade show booths, no multi-track programs that distract, and no search engine optimization sessions. The Boston Search Engine Meeting is focused on substance from informed experts. More information about the premier search conference is here. Register now.
The full text of the interview with David Milward appears below:
Will you describe briefly your company and its search / content processing technology?
Linguamatics’ goal is to enable our customers to obtain intelligent answers from text – not just lists of documents. We’ve developed agile natural language processing (NLP)-based technology that supports meaning-based querying of very large datasets. Results are delivered as relevant, structured facts and relationships about entities, concepts and sentiment.
Linguamatics’ main focus is solving knowledge discovery problems faced by pharma/biotech organizations. Decision-makers need answers to a diverse range of questions from text, both published literature and in-house sources. Our I2E semantic knowledge discovery platform effectively treats that unstructured and semi-structured text as a structured, context-specific database they can query to enable decision support.
Linguamatics was founded in 2001, is headquartered in Cambridge, UK with US operations in Boston, MA. The company is privately owned, profitable and growing, with I2E deployed at most top-10 pharmaceutical companies.
What are the three major challenges you see in search / content processing in 2009?
The obvious challenges I see include:
- The ability to query across diverse high volume data sources, integrating external literature with in-house content. The latter content may be stored in collaborative environments such as SharePoint, and in a variety of formats including Word and PDF, as well as semi-structured XML.
- The need for easy and affordable access to comprehensive content such as scientific publications, and being able to plug content into a single interface.
- The demand by smaller companies for hosted solutions.
With search / content processing decades old, what have been the principal barriers to resolving these challenges in the past?
People have traditionally been able to do simple querying across multiple data sources, but there has been an integration challenge in combining different data formats, and typically the rich structure of the text or document has been lost when moving between formats.
Publishers have tended to develop their own tools to support access to their proprietary data. There is now much more recognition of the need for flexibility to apply best of breed text mining to all available content.
Potential users have been reluctant to trust hosted services when queries are business-sensitive. However, hosting is becoming more common, and a considerable amount of external search is already happening using Google and, in the case of life science researchers, PubMed.
What is your approach to problem solving in search and content processing?
Our approach encompasses all of the above. We want to bring the power of NLP-based text mining to users across the enterprise – not just the information specialists. As such we’re bridging the divide between domain-specific, curated databases and search, by providing querying in context. You can query diverse unstructured and semi-structured content sources, and plug in terminologies and ontologies to give the context. The results of a query are not just documents, but structured relationships which can be used for further data mining and analysis.
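To make the idea concrete for readers outside the text mining world, here is a deliberately tiny Python sketch. This is the addled goose's illustration, not Mr. Milward's words and not Linguamatics' I2E; the terminology and the single hard-coded pattern are invented stand-ins for what a real NLP system would do.

```python
# A toy illustration (not I2E): map surface forms to concepts via a small
# terminology, then return structured (subject, relation, object) facts.

import re

# Invented terminology: surface forms mapped to canonical concept identifiers.
TERMINOLOGY = {
    "aspirin": "DRUG:aspirin",
    "acetylsalicylic acid": "DRUG:aspirin",
    "headache": "INDICATION:headache",
}

# One hard-coded relationship pattern; a real system would use full NLP parsing.
PATTERN = re.compile(
    r"(?P<drug>aspirin|acetylsalicylic acid)\s+(?:treats|relieves)\s+(?P<indication>headache)",
    re.IGNORECASE,
)

def extract_relationships(text):
    """Return structured facts instead of a list of matching documents."""
    facts = []
    for match in PATTERN.finditer(text):
        drug = TERMINOLOGY[match.group("drug").lower()]
        indication = TERMINOLOGY[match.group("indication").lower()]
        facts.append((drug, "treats", indication))
    return facts

print(extract_relationships("Several studies report that acetylsalicylic acid relieves headache."))
# [('DRUG:aspirin', 'treats', 'INDICATION:headache')]
```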
Multi-core processors provide significant performance boosts. But search / content processing often faces bottlenecks and latency in indexing and query processing. What's your view on the performance of your system or systems with which you are familiar?
Our customers want scalability across the board – both in terms of the size of the document repositories that can be queried and also appropriate querying performance. The hardware does need to be compatible with the task. However, our software is designed to give valuable results even on relatively small machines.
People can have an insatiable demand for finding answers to questions – and we typically find that customers quickly want to scale to more documents, harder questions, and more users. So any text mining platform needs to be both flexible and scalable to support evolving discovery needs and maintain performance. In terms of performance, raw CPU speed is sometimes less of an issue than network bandwidth especially at peak times in global organizations.
Information governance is gaining importance. Search / content processing is becoming part of eDiscovery or internal audit procedures. What's your view of the role of search / content processing technology in these specialized sectors?
Implementing a proactive eDiscovery capability rather than reacting to issues when they arise is becoming a strategy to minimize potential legal costs. The forensic abilities of text mining are highly applicable to this area and have an increasing role to play in both eDiscovery and auditing. In particular, the ability to search for meaning and to detect even weak signals connecting information from different sources, along with provenance, is key.
As you look forward, what are some new features / issues that you think will become more important in 2009? Where do you see a major break-through over the next 36 months?
Organizations are still challenged to maximize the value of what is already known – both in internal documents or in published literature, on blogs, and so on. Even in global companies, text mining is not yet seen as a standard capability, though search engines are ubiquitous. This is changing and I expect text mining to be increasingly regarded as best practice for a wide range of decision support tasks. We also see increasing requirements for text mining to become more embedded in employees’ workflows, including integration with collaboration tools.
Graphical interfaces and portals (now called composite applications) are making a comeback. Semantic technology can make point and click interfaces more useful. What other uses of semantic technology do you see gaining significance in 2009? What semantic considerations do you bring to your product and research activities?
Customers recognize the value of linking entities and concepts via semantic identifiers. There’s effectively a semantic engine at the heart of I2E and so semantic knowledge discovery is core to what we do. I2E is also often used for data-driven discovery of synonyms, and association of these with appropriate concept identifiers.
In the life science domain commonly used identifiers such as gene ids already exist. However, a more comprehensive identification of all types of entities and relationships via semantic web style URIs could still be very valuable.
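Again in the goose's words, not Mr. Milward's: a minimal Python sketch of what mapping synonyms to concept identifiers or semantic web style URIs can look like. The gene symbols and URIs below are invented placeholders, not a real registry.

```python
# Toy synonym-to-identifier mapping: associate surface forms with one
# concept identifier, here expressed as a semantic web style URI.

SYNONYMS = {
    "p53": "http://example.org/gene/TP53",
    "tp53": "http://example.org/gene/TP53",
    "tumor protein 53": "http://example.org/gene/TP53",
    "her2": "http://example.org/gene/ERBB2",
    "erbb2": "http://example.org/gene/ERBB2",
}

def normalize(mention):
    """Map a textual mention to its concept URI, or None if unknown."""
    return SYNONYMS.get(mention.strip().lower())

for mention in ("TP53", "tumor protein 53", "HER2", "an unknown gene"):
    print(f"{mention!r} -> {normalize(mention)}")
```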
Where can I find more information about your products, services, and research?
Please contact Susan LeBeau (susan.lebeau@linguamatics.com, tel: +1 774 571 1117) and visit www.linguamatics.com.
Stephen Arnold (ArnoldIT.com) and Harry Collier (Infonortics, Ltd.), February 16, 2009
Forbes Calls Microsoft’s Ballmer Insane
February 15, 2009
Wow, not even the addled goose risks headlines like this one in MetaData: "Steve Ballmer Is Insane" here. There's no allegedly, slightly, or possibly. Just insane. The writer is Wendy Tanaka, and I am shaking my feathers nervously to ponder what she would call this addled goose. Fricasseed? Silly? Cooked? Addled? No, that won't work. I call myself addled.
What does insane mean? According to Dictionary.com, a property of Ask.com, a source I really trust, insane has three meanings:
- not sane; not of sound mind; mentally deranged.
- of, pertaining to, or characteristic of a person who is mentally deranged: insane actions; an insane asylum.
- utterly senseless: an insane plan.
Ms. Tanaka, who I opine is younger than this 65-year-old addled goose, may also be younger in mind and spirit than I. She focuses on the lousy economy and Microsoft's decision to open retail stores. To spearhead the retail effort, Microsoft has snatched a Wal*Mart superstar. In Harrod's Creek, Wal*Mart is not a store. Wal*Mart is the equivalent of a vacation.
My hunch is that Ms. Tanaka and her sources are skeptical of Microsoft's push into retailing. She cites an MBA trophy generation type wizard from Technology Business Research, an outfit with a core competency in retailing, I presume. Mr. Krans allegedly said:
Apple’s retail store rollout coincided with the introduction of the iPod in 2001, which gave a very compelling reason for consumers to visit its locations. …Microsoft brings no such compelling product to bear in its retail entrance, which makes getting consumers in the door a large obstacle to overcome.
This addled goose thinks there are significant benefits to Microsoft retail stores located in Harrod’s Creek. Read on.
A Baker’s Dozen of Benefits from MSFT Retail Shops
Here are some reasons that this addled goose thinks that the Microsoft retail push is such an interesting idea:
- Retail stores will permit Microsoft to showcase the Zune and related products. I saw a Zune case with happy faces in the local Coconut record shop last September
- Individuals interested in the XBox 360 can buy these at the Microsoft store, eliminating a need to go to BestBuy, GameStop, or the other established retail outlets for this product.
- Procurement teams could take a field trip, much like the Harrod's Creek residents' vacation at Wal*Mart, to buy the SharePoint Fast ESP product offerings. I think there will be two, maybe three, versions of SharePoint with Fast technology on offer soon
- The local customer support outfit Administrative Services could drop in to the Microsoft retail shop near Fern Valley Road and grab one or more versions of Dynamics along with Windows Server, SQL Server, and any other server needed to make Dynamics sing a happy song
- Display the wide range of mobile devices running Windows Mobile. I don't think I have seen every Windows Mobile device in one location. What a convenience for disenchanted Nokia, iPhone, and BlackBerry users.
- Offer the complete line up of Microsoft mice and keyboards. Shame about the nifty Microsoft networking products in the compelling pale orange and green boxes.
- Introduce a service bar with Windows geniuses to address questions from customers. I would drop in to get help when my MSDN generated authentication keys don’t work or when the Word 2007 formatting on a Windows system does not stick, yet the formatting works just fine on a Mac with Word 2007 installed.
- Provide a lineup of Microsoft T-shirts, caps, and other memorabilia, including the new "old" range of gear with MS DOS era logos
- Purchase CALs for various Microsoft products, eliminating the hassle of dealing with the Platinum, Gold, Silver, and other semi-precious metal badged partners
- Purchase Microsoft Consulting support so I can get different Microsoft server products to talk to one another and expose their data and metadata to SharePoint
- Sign up for Microsoft Live.com cloud services and get help with the horizontal and sometimes confusing to me “blank” slate interfaces. See item 7 above.
- Meet Microsoft partners, eliminating the need to go to a trade show to learn about “snap in” products that extend, enrich, and sometimes replace Microsoft components that don’t work as advertised for some customer applications.
- Visit with Microsoft executives. I think of this as an extension of the company’s “open door policy.” Nothing will boost share price more than giving retail customers an opportunity to talk with senior Microsoft executives about Vista, usability testing, prices, variants of Windows 7, the difference between MSN.com and Live.com, and job opportunities.
Insane? Wrong. From Harrod's Creek, the retail plan makes perfect sense. I wonder if the Microsoft retail shop will be in downtown Harrod's Creek or out by the mine runoff pond on Wolf Creek Road? Maybe we'll get more than one store, just like Taco Bell.
Stephen Arnold, February 15, 2009
Google and Torrents: Flashing Yellow Lights
February 15, 2009
Ernesto, writing in Torrent Freak here, may have flashed the first yellow signal for Google's custom search service. You can learn about the CSE here. The article that caught my attention as I was recycling some old information for part six of my "mysteries of online" opinion series was "uTorrent Adds Google Powered Torrent Search." If you don't know what a torrent is, ask your child or your local hacker. uTorrent is now using "a Google powered torrent search engine". Ernesto said:
While the added search is not a particular good way to find torrents, its addition to the site is an interesting move by BitTorrent Inc. Not so long ago, uTorrent removed the search boxes to sites like Mininova and isoHunt from their client, as per requests from copyright holders. However, since BitTorrent Inc. closed its video store, there is now no need to please Hollywood and they are free to link to torrent sites again.
With more attention clumping to pirated software and digital content, Ernesto’s article might become the beacon that attracts legal eagles, regulators, and folks looking to get something for nothing. I will keep my eye open for other Google assertions. Until I get more information, I want to remind you that I am flagging an article by another person. I am not verifying Ernesto’s point. The story could be the flashing signal or a dead bulb. I find it interesting either way. Google’s index has many uses; for example, looking for the terms hack, password, confidential, etc.
Stephen Arnold, February 15, 2009