Google Searches, Prediction, and Fabulous Stock Market Returns?

July 28, 2014

I read “Google Searches Hold Key to Future Market Crashes.” The main idea in my opinion is:

Moat [female big thinker at Warwick Business School] continued, “Our results are in line with the hypothesis that increases in searches relating to both politics and business could be a sign of concern about the state of the economy, which may lead to decreased confidence in the value of stocks, resulting in transactions at lower prices.”

So will the Warwick team cash in on the stock market?

Well, there is a cautionary item as well:

“Our results provide evidence of a relationship between the search behavior of Google users and stock market movements,” said Tobias Preis, Associate Professor of Behavioral Science and Finance at Warwick Business School. “However, our analysis found that the strength of this relationship, using this very simple weekly trading strategy, has diminished in recent years. This potentially reflects the increasing incorporation of Internet data into automated trading strategies, and highlights that more advanced strategies are now needed to fully exploit online data in financial trading.”

Rats. Quants are already on this, it seems.

What’s fascinating to me is that the Warwick experts overlooked a couple of points; namely:

  1. Google is using its own predictive methods to determine what users see when they get a search result based on the behavior of others. Recursion, anyone?
  2. Google provides more searches with each passing day to those using mobile devices. By their nature, traditional desktop queries are not exactly the same as mobile device searches. As a workaround, Google uses clusters and other methods to give users what Google thinks the user really wants. Advertising, anyone?
  3. The stock pickers that are the cat’s pajamas at the B school have to demonstrate their acumen on the trading floor. Does insider trading play a role? Does working at a Goldman Sachs-type of firm help a bit?

Like perpetual motion, folks will keep looking for a way to get an edge. Why are large international banks paying some hefty fines? Humans, I believe, not algorithms.

Stephen E Arnold, July 28, 2014

ZyLAB’s Mary Mack Urges Caution with Predictive Coding

July 9, 2014

An article titled ZyLAB’s Mary Mack on Predictive Coding Myths and Traps for the Unwary on The eDisclosure Information Project offers some insight into the trend of viewing predictive coding as some form of “magic.” This idea is quickly brushed aside, and predictive coding is returned to the realm of statistics and technology. The article quotes Mary Mack of ZyLAB,

“Machine learning and artificial intelligence for legal applications is our future. It’s a wonderful advance that the judiciary is embracing machine-assisted review in the form of predictive coding. While we steadily move into the second and much less risky generation of predictive coding, there are still traps and pitfalls that are better considered early for mitigation. This session and the session on eDiscovery taboos will expose a few concerns to consider when evaluating predictive coding for specific or portfolio litigation.”

In this article ZyLAB offers a counterpoint to Recommind, which asserted in a recent article that predictive coding is to eDiscovery what a GPS is to driving cross-country. ZyLAB prefers a much more cautious approach to the innovative technology. The article stresses that an objective, fact-based discussion of the merits and pitfalls of predictive coding is a necessary step in its growth.

Chelsea Kerwin, July 09, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Predictive Coding for eDiscovery Users in a Hurry

July 9, 2014

The article on Recommind titled Why eDiscovery Needs GPS (And a Soundtrack) whimsically applies the basic tenets of GPS to the eDiscovery process with the aid of song titles. If you can get through the song titles bit, there is some meat to the article, though not much. The author suggests several areas where predictive coding might make eDiscovery easier and more efficient. He explains his thinking,

“A good eDiscovery navigator will help you take a reliable Estimation Sample… early on to determine the statistically likely number of responsive documents for any issue in your matter.  It will then plot that destination clearly, along with the appropriate margin of error, and show your status toward it at every point along The Long and Winding Road. It should also clearly display the responsiveness levels you’re experiencing with each iteration as you review the machine-suggested document batches.”

The type of guidance and efficiency that predictive coding offers is already being utilized by companies conducting internal investigations and “reviewing data already seized by a regulatory agency.” The author conditions the usefulness of predictive coding on its being flexible and able to recalculate based on any change in direction. When speed and effectiveness are of paramount importance, a GPS for eDiscovery might be the best possible tool.
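The “Estimation Sample” with “the appropriate margin of error” that the author mentions is ordinary survey statistics. As a rough sketch of the arithmetic (my own illustration, not Recommind’s implementation), here is how many documents a team would review, and how the sample projects onto the whole collection:

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Documents to review to estimate prevalence within +/- margin_of_error.
    p=0.5 is the conservative worst-case assumed prevalence."""
    return math.ceil((confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2)

def estimate_responsive(sample_hits, sample_n, population, confidence_z=1.96):
    """Project the sample's responsiveness rate onto the collection,
    returning (estimated responsive documents, margin of error in documents)."""
    p_hat = sample_hits / sample_n
    moe = confidence_z * math.sqrt(p_hat * (1 - p_hat) / sample_n)
    return population * p_hat, population * moe

# Reviewing 385 random documents gives +/- 5 points at 95 percent confidence.
n = sample_size(0.05)
est, moe = estimate_responsive(sample_hits=50, sample_n=n, population=100_000)
```

Note that for large collections the required sample barely grows with collection size, which is why the “destination” can be plotted early and cheaply.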

Chelsea Kerwin, July 09, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

DuPont v Kolon Industries Shines Light on Keyword Search and Spoliation and Ignores Predictive Coding

June 27, 2013

The article on e-discovery 2.0 titled The eDiscovery Trinity: Spoliation Sanctions, Keywords and Predictive Coding explores the three issues most relevant to clients and counsel. One case cited is DuPont v. Kolon Industries, an intellectual property lawsuit in which Kolon’s complaint was that DuPont’s forensic experts failed to execute an efficient keyword search, meaning that of the nearly 18,000 hits only about ten percent were relevant. The article explains,

“Kolon then asserted that the “reckless inefficiency” of the search methodology was “fairly attributable to the fact that DuPont ran insipid keywords like ‘other,’ ‘news,’ and ‘mail.’” The court observed how important search terms had become in discovery: “… in the current world of litigation, where so many documents are stored and, hence, produced, electronically, the selection of search terms is an important decision because it, in turn, drives the subsequent document discovery, production and review.”

Ultimately the court favored DuPont, calling its efforts reasonable. The article mentions that although spoliation and keywords were taken into consideration in this particular case, it did not address predictive coding. What would have happened if DuPont had utilized predictive coding is entirely hypothetical, but some do argue that it could have minimized the cost and produced the same group of relevant documents. The article, though it offers an evocative metaphor for eDiscovery, is certainly not the end of the debate.
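The “reckless inefficiency” complaint is at bottom a complaint about precision: broad terms like “other” and “mail” match nearly everything, which is how 18,000 hits end up only ten percent relevant. A toy sketch (a hypothetical mini-collection of my own invention, not the DuPont production) of how term choice drives precision:

```python
def keyword_hits(documents, terms):
    """Return documents containing any of the search terms (case-insensitive OR)."""
    terms = [t.lower() for t in terms]
    return [d for d in documents if any(t in d["text"].lower() for t in terms)]

def precision(hits):
    """Fraction of hits that a human reviewer marked relevant."""
    return sum(d["relevant"] for d in hits) / len(hits) if hits else 0.0

# Hypothetical documents: broad terms sweep in irrelevant mail.
docs = [
    {"text": "Kevlar process specs attached", "relevant": True},
    {"text": "company news and other mail",   "relevant": False},
    {"text": "forward this mail to legal",    "relevant": False},
    {"text": "aramid fiber trade secret",     "relevant": True},
]
broad  = keyword_hits(docs, ["other", "news", "mail"])   # 2 hits, none relevant
narrow = keyword_hits(docs, ["kevlar", "aramid"])        # 2 hits, both relevant
```

The insipid term set produces hits with zero precision; the subject-matter terms hit only relevant material. Scale that gap up and the review bill explains itself.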

Chelsea Kerwin, June 27, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Predictive Apps Continue to Evolve

June 10, 2013

Algorithms that mine our data to predict what we want or need are getting more sophisticated. The MIT Technology Review reports, “With Personal Data, Predictive Apps Stay a Step Ahead.” Recently, Google Now (part of the latest Android version and now included in the Google search app for the iPhone) has captured some attention. That app pulls information from a user’s Gmail, Google Calendar, and Google Web searches to spontaneously present timely, relevant (ideally) information, like traffic conditions between office and home as one is wrapping up the workday.

The next stage of this predictive ability is on its way. Reporter Tom Simonite tells us:

“Engineers at Google, Osito, and elsewhere seek to wring more insights from the data they collect about their users. Osito’s engineers are working to learn more from a person’s past location traces to refine predictions of future activity, says [Osito’s Bill] Ferrell. Google Now recently began showing the weather in places it believes you’re headed to soon. It can also notify you of nearby properties for sale if you have recently done a Web search suggesting you’re looking for a new home.

“Machine learning experts at Grokr, a predictive app for the iPhone, have found they can divine the ethnicity, gender, and age of their users to a high degree of accuracy, says CEO Srivats Sampath. ‘That can help us predict places you might like to go better,’ he says. The information will be used to fine-tune the recommendations Grokr offers for restaurants and music events.”

Is the trend creepy or helpful? A bit of both, perhaps. See the article for more on the current state of this “predictive intelligence.”

My apprehension goes beyond privacy and past any discomfort with increasingly sophisticated AI. I am concerned that we are giving more fuel to the already raging confirmation-bias fire. If our devices serve up only information and entertainment we are predisposed to, how likely are we to learn anything new? More broadly, the chances of conversing intelligently with someone on the other side of any professional, cultural, or political divide will continue to dwindle, since each party is relying on a different set of “facts.”

Ah, well, there is no going backward. Perhaps someone could design an app that deliberately suggests bits of content we would otherwise avoid as a way to combat our own prejudices. I would use it, and I suspect other independent thinkers would, too. Any developers out there feel like taking on a socially beneficial project?

Cynthia Murrell, June 10, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Predictive Coding: Who Is on First? What Is the Betting Game?

December 20, 2012

I am confused, but what’s new? The whole “predictive analytics” rah rah causes me to reach for my NRR 33 dB bell-shaped foam ear plugs.

Look. If predictive methods worked, there would be headlines in the Daily Racing Form, in the Wall Street Journal, and in the Las Vegas sports books. The cheerleaders for predictive wizardry are pitching breakthrough technology in places where accountability is a little fuzzier than a horse race, stock picking, and betting on football games.


The godfather of cost cutting for legal document analysis. Reverend Thomas Bayes, 1701 to 1761. I heard he said, “Praise be, the math doth work when I flip the numbers and perform the old inverse probability trick. Perhaps I shall apply this to legal disputes when lawyers believe technology will transform their profession.” Yep, partial belief. Just the ticket for attorneys. See http://goo.gl/S5VSR.

I understand that there is PREDICTION which generates tons of money to the person who has an algorithm which divines which nag wins the Derby, which stock is going to soar, and which football team will win a particular game. Skip the fuzzifiers like 51 percent chance of rain. It either rains or it does not rain. In the harsh world of Harrod’s Creek, capital letter PREDICTION is not too reliable.

The lower case prediction is far safer. The assumptions, the unexamined data, the thresholds hardwired into the off-the-shelf algorithms, and the fiddling with Bayesian relaxation factors are aimed at those looking to cut corners, trim costs, or figure out which way to point the hit-and-miss medical research team.
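For what it is worth, the Bayesian machinery being fiddled with is just the good Reverend’s rule for updating a partial belief. A minimal worked example (with numbers I made up, not anyone’s product), flipping P(term | relevant) into P(relevant | term):

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """P(relevant | term present), via Bayes's rule: the 'inverse probability trick'."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Assume 10 percent of documents are relevant; a hot term appears in 80 percent
# of relevant documents but also in 20 percent of irrelevant ones.
post = bayes_posterior(prior=0.10, likelihood=0.80, false_positive_rate=0.20)
# post comes out near 0.31: the term lifts belief from 10 percent to about
# 31 percent. Partial belief, not PREDICTION.
```

Note how far 0.31 sits from certainty. The math works; the marketing copy rounds up.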

Which is it? PREDICTION or prediction?

I submit that it is lower case prediction with upper case MARKETING wordsmithing.

Here’s why:

I read “The Amazing Forensic Tech behind the Next Apple, Samsung Legal Dust Up (and How to Hack It).” Now that is a headline. Skip the “amazing,” “Apple,” “Samsung,” and “Hack.” I think the message is that Fast Company has discovered predictive text analysis. I could be wrong here, but I think Fast Company might have been helped along by some friendly public relations type.

Let’s look at the write up.

First, the high profile Apple Samsung trial becomes the hook for “amazing” technology. The idea is that smart software can grind through the text spit out from a discovery process. In the era of ballooning digital data, it is really expensive to pay humans (even those working at a discount in India or the Philippines) to read the emails, reports, and transcripts.

Let a smart machine do the work. It is cheaper, faster, and better. (Shouldn’t one have to pick two of these attributes?)

Fast Company asserts:

“A couple good things are happening now,” Looby says. “Courts are beginning to endorse predictive coding, and training a machine to do the information retrieval is a lot quicker than doing it manually.” The process of “Information retrieval” (or IR) is the first part of the “discovery” phase of a lawsuit, dubbed “e-discovery” when computers are involved. Normally, a small team of lawyers would have to comb through documents and manually search for pertinent patterns. With predictive coding, they can manually review a small portion, and use the sample to teach the computer to analyze the rest. (A variety of machine learning technologies were used in the Madoff investigation, says Looby, but he can’t specify which.)
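The workflow Looby describes, reviewing a small portion by hand and letting the machine handle the rest, can be sketched with a toy naive Bayes relevance ranker. This is a common textbook approach; I have no idea which machine learning methods Recommind or the Madoff investigators actually used.

```python
import math
from collections import Counter

def train(labeled):
    """labeled: list of (text, is_relevant) pairs from the hand-reviewed seed set.
    Returns per-class word counts and per-class document totals."""
    counts = {True: Counter(), False: Counter()}
    totals = Counter()
    for text, label in labeled:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def score_relevant(text, counts, totals):
    """Log-odds that a document is relevant, with add-one smoothing."""
    vocab = set(counts[True]) | set(counts[False])
    log_odds = math.log((totals[True] + 1) / (totals[False] + 1))
    for word in text.lower().split():
        p_rel = (counts[True][word] + 1) / (sum(counts[True].values()) + len(vocab))
        p_irr = (counts[False][word] + 1) / (sum(counts[False].values()) + len(vocab))
        log_odds += math.log(p_rel / p_irr)
    return log_odds

# Lawyers hand-review a small seed set...
seed = [
    ("patent license draft attached", True),
    ("infringement claim analysis", True),
    ("lunch menu for friday", False),
    ("parking garage closed monday", False),
]
counts, totals = train(seed)
# ...then the machine ranks the unreviewed pile for them.
assert score_relevant("draft infringement analysis", counts, totals) > \
       score_relevant("friday lunch plans", counts, totals)
```

The “training” is just counting words in the reviewed sample, which is why the machine is so much quicker than the associates. It is also why garbage seed sets yield garbage rankings.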


IBM Asks Britain to Discover Full Potential of Crime Analysis Software

December 14, 2012

England and Wales residents are soon to elect local cop chiefs, and IBM is already trying to help the new force with a little advice regarding predictive model tech. According to the article “IBM Begs Britain’s New Top Cops: C’mon, Set Up Pre-Crime Units” on The Register, the UK already uses IBM’s SPSS statistics module and i2 Analyst’s Notebook, but apparently not to the full potential of the software. Instead of crime prevention, the software is being used for “beancounting” and basic statistical analysis.

The article comments on the potential of the predictive content:

“IBM believe British forces should hit the beat on crime prevention by employing content analysis and predictive modeling using unstructured data – something that comprises 95 per cent of the data police handle in the form of video, written statements, crime reports, media, Tweets – along with the structured stuff. Also, police should be able to draw on data from sources outside of day-to-day policing – groups involved in housing and education.”

The article states that one police department in the US, cooperating with IBM, has reduced crime by 30 percent by predicting where a crime would happen.

Seems like IBM is a big motion picture fan. First, we note Watson is eerily similar to HAL, the smart computer in 2001: A Space Odyssey. Now Minority Report is moving the company toward PreCrime if this report is accurate. Next up: Disney’s Episode VII of Star Wars? We will be waiting with our popcorn.

Andrea Hayden, December 14, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Google Gets More Predictive

October 30, 2012

I heard the cheerleading over the news broadcasts about the terrible storm. I urge you to read “Google Now: Behind the Predictive Future of Search.” But the “real” story from the “real” journalist is the subtitle: “How Google Learned to Un-Fragment Itself and Create the Next Big Thing.” Faint praise? No. Bold assertions about the “un-fragmented Google.”

The guts of the story pivot on Google’s new service Google Now. The idea is that “now” information is what defines the modern mobile user. I use my mobile as a phone and to check email. Therefore, I struggle with the “predictive future” thing.

The idea is that

… your phone is more “Personal Assistant” than “Bar bet settler.” The difference is that the former actually understands what you need while the latter is a blunt search instrument.

Universal appeal is assumed. The secret ingredient for the predictive search magic is Android 4.2.

Here’s the write up’s digest of the “big thing”:

It’s essentially an app that combines two important functions: voice search and “cards” that bubble up relevant information on a contextual basis. Actually, Google Now technically only refers to the ambient information part of the equation, a branding kerfuffle that distinguishes it from Apple’s Siri product yet still causes confusion. Those cards might contain local restaurants, the traffic on your commute home, or when your flight is about to take off. They appear automatically as Google tries to guess the information you’ll need at any given moment. While it seems like a relatively simple service, it’s only really possible because of the massive amount of computational power Google can leverage alongside the massive amount of data Google knows about you thanks to your searches.

The predictive search functionality has been part of Google Web search since August 2012. The key point is:

These new cards are actually similar to a feature that Google added to its web search results this past August, both in content and in style. That’s probably not an accident — if you assume Google has already won the battle for search, the next battle is giving you information before you even search for it. When it comes to deciding which data to give you, Barra tells us that Google has “a pipeline [...], possibly in the hundreds of cards” from its many engineering teams. Rather than flood users with all of those new cards, Google is taking a slow and steady approach to adding those new features — if only because right now it can only add those cards with a software update.

The numerical recipes behind the Now service include neural networks (what I call smart software) and knowledge graphs (entity relationships). Both of these methods have been in development and use for years. Google itself owns a chunk of a company which has quite sophisticated predictive technology. There is more to come from Google, including hot visualizations and improved voice interaction with mobile devices.

If you want to see a write up that puts the Dallas Cowboys cheerleaders to shame, check out this story. Like the cheerleaders, there will be changes in the lineup with each update cycle. For now, the magic is in the eye of the True Believer.

I just make voice calls and check email.

Stephen E Arnold, October 31, 2012

Recommind Publishes Predictive Coding Guide

October 8, 2012

Darned amazing. It is like rocket science for dummies. The Wall Street Journal’s Market Watch reports, “Recommind Announces ‘Predictive Coding for Dummies’.” The publication, part of the “for Dummies” series of manuals, aims to help document reviewers speed and automate their process. The press release explains:

“This guide is a definitive text covering the challenges of document review in eDiscovery, what makes it vital to legal cases, and what to look for in an eDiscovery solution. ‘Predictive Coding for Dummies’ also outlines real-world cost savings through Predictive Coding solutions like Axcelerate Review & Analysis, Recommind’s leading end-to-end eDiscovery product. . . .

“Through hundreds of implementations, Recommind understands firsthand the high cost associated with using old approaches to document review and the benefits an eDiscovery solution provides. Recommind’s eDiscovery solution is designed to address the specific context of today’s law firms and legal departments, including the ever-increasing volume of information.”

Though it sounds like the guide may amount to an info-advertisement for Recommind’s products, you may be able to glean some useful nuggets from it. Chapter titles include “Information Explosion and Electronic Discovery”; “Putting Predictive Coding to Work”; and “The Top Benefits of Predictive Coding.”

Axcelerate eDiscovery is Recommind’s flagship product, based on the company’s CORE platform. The company was formed in 2000. It is headquartered in San Francisco and maintains offices around the world.

Cynthia Murrell, October 08, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Google Autocomplete: Is Smart Help a Hindrance?

September 10, 2012

You may have heard of the deep extraction company Attensity. There is another company in a similar business with the name inTTENSITY. Note the playful misspelling of the common word “intensity.” What does a person looking for the company inTTENSITY get when he or she runs a query on Google? Look at what Google’s autocomplete suggestions recommend when I type intten:

[Image: Google autocomplete suggestions for “intten”]

The company’s spelling appears along with the less helpful “interstate ten”, “internet explorer ten”, and “internet icon top ten.” If I enter “inten”, I don’t get the company name. No surprise.

[Image: Google autocomplete suggestions for “inten”]

Is Google’s autocomplete a help or hindrance? The answer, in my opinion, is that it depends on the user and what he or she is seeking.
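Mechanically, autocomplete is little more than prefix matching reranked by query popularity, which is why a rare brand surfaces for the long prefix “intten” but vanishes behind popular queries for “inten.” A toy sketch with made-up frequencies (Google’s actual ranking signals are far richer and undisclosed):

```python
def suggest(prefix, query_log, k=4):
    """Return up to k completions matching the prefix, most-searched first."""
    prefix = prefix.lower()
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
    return [q for q, n in sorted(matches, key=lambda pair: -pair[1])[:k]]

# Hypothetical search frequencies.
log_counts = {
    "inttensity": 40,
    "interstate ten": 900,
    "internet explorer ten": 5000,
    "intense workout": 80000,
    "intention": 120000,
}
suggest("intten", log_counts)  # only "inttensity" survives the long prefix
suggest("inten", log_counts)   # popular queries push the brand out of view
```

One character of prefix is the difference between being the only suggestion and being invisible. Marketers, take note.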

I just read “Germany’s Former First Lady Sues Google For Defamation Over Autocomplete Suggestions.” According to the write up:

When you search for “Bettina Wulff” on Google, the search engine will happily autocomplete this search with terms like “escort” and “prostitute.” That’s obviously not something you would like to be associated with your name, so the wife of former German president Christian Wulff has now, according to Germany’s Süddeutschen Zeitung, decided to sue Google for defamation. The reason why these terms appear in Google’s autocomplete is that there have been persistent rumors that Wulff worked for an escort service before she met her husband. Wulff categorically denies that this is true.

The article explains that autocomplete has been the target of criticism before. The concluding statement struck me as interesting:

In Japan, a man recently filed a suit against Google after the autocomplete feature started linking his names with a number of crimes he says he wasn’t involved in. A court in Japan then ordered Google to delete these terms from autocomplete. Google also lost a similar suit in Italy in 2011.

I have commented about the interesting situations predictive algorithms can create. I assume that Google’s numerical recipes chug along like a digital and intent-free robot.

