Algorithms Are Getting Smarter at Identifying Human Behavior

June 19, 2017

Algorithms deployed by large tech firms are getting better at understanding human behavior, reveals a former Google data scientist.

In an article published by Business Insider titled A Former Google Data Scientist Explains Why Netflix Knows You Better Than You Know Yourself, Seth Stephens-Davidowitz says:

Many gyms have learned to harness the power of people’s over-optimism. Specifically, he said, “they’ve figured out you can get people to buy monthly passes or annual passes, even though they’re not going to use the gym nearly enough to warrant this purchase.”

Companies like Netflix use this tendency to their benefit. In its early years, Netflix encouraged users to create playlists of titles they intended to watch. However, most users ended up watching the same run-of-the-mill content, so Netflix changed course and began recommending content based on users’ actual viewing habits. It proves one thing: algorithms are getting smarter at understanding and predicting human behavior, and that is both good and bad.
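The shift Netflix made, from what users say they want to what they actually watch, is the core idea behind collaborative filtering. A minimal, purely illustrative sketch (our own toy example, not Netflix’s actual algorithm) might score unseen titles by how strongly they co-occur with a user’s viewing history:

```python
from collections import Counter

def recommend(user_history, all_histories, top_n=2):
    """Toy co-viewing recommender (illustrative only, not Netflix's system):
    score each unseen title by how often it appears in other viewing
    histories that overlap with this user's."""
    scores = Counter()
    for other in all_histories:
        overlap = len(user_history & other)
        if overlap:
            for title in other - user_history:
                scores[title] += overlap
    return [title for title, _ in scores.most_common(top_n)]

histories = [{"A", "B"}, {"A", "B", "C"}, {"B", "C"}, {"A", "D"}]
print(recommend({"A", "B"}, histories))  # ['C', 'D']
```

Real recommenders weight by ratings, recency, and far larger co-occurrence matrices, but the principle is the same: observed behavior beats stated preference.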

Vishal Ingole, June 19, 2017

Crowd Wisdom Adjusted to Measure Information Popularity

June 2, 2017

The article on ScienceDaily titled In Crowd Wisdom, the ‘Surprisingly Popular’ Answer Can Trump Ignorance of the Masses conveys the latest twist on crowd wisdom, or efforts to answer questions by asking many people rather than specialists. Unsurprisingly, crowd wisdom often is not very wise at all, but rather favors the most popular information. The article uses the example of asking various populations whether Philadelphia is the capital of Pennsylvania. Those who answered yes also believed that others would agree, making it a popular answer. The article goes on to explain,

Meanwhile, a certain number of respondents knew that the correct answer is “no.” But these people also anticipated that many other people would incorrectly think the capital is Philadelphia, so they also expected a very high percentage of “yes” answers. Thus, almost everyone expected other people to answer “yes,” but the actual percentage of people who did was significantly lower. “No” was the surprisingly popular answer because it exceeded expectations of what the answer would be.

By measuring the perceived popularity of a given answer, researchers saw errors reduced by over 20% compared to straightforward majority votes, and by almost 25% compared to confidence-weighted votes. As in the case of the Philadelphia question above, those who predicted that they were in the minority deserve the most attention because they had enough information to expect that many people would incorrectly vote yes. If you take away nothing else from this, let it be that Harrisburg, not Philly, is the capital of Pennsylvania.
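The mechanics of the “surprisingly popular” rule are straightforward: collect each respondent’s own answer plus their prediction of how others will answer, then choose the answer whose actual share exceeds its predicted share. A minimal sketch for a yes/no question (our own illustration; the numbers are invented):

```python
def surprisingly_popular(votes):
    """votes: list of (answer, predicted_yes_share) pairs, where answer is
    "yes" or "no" and predicted_yes_share is that respondent's estimate of
    the fraction of people who will answer "yes"."""
    actual_yes = sum(1 for answer, _ in votes if answer == "yes") / len(votes)
    predicted_yes = sum(pred for _, pred in votes) / len(votes)
    # The answer that beats expectations wins, even if it loses the raw vote.
    return "yes" if actual_yes > predicted_yes else "no"

# Philadelphia-style example: 65% vote "yes", but respondents expect
# 76.5% "yes" on average, so "no" is the surprisingly popular answer.
votes = [("yes", 0.8)] * 65 + [("no", 0.7)] * 35
print(surprisingly_popular(votes))  # no
```

Note that a simple majority vote on the same data would return “yes”; the predicted shares are what let the minority of informed respondents win.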

Chelsea Kerwin, June 2, 2017

How Data Science Pervades

May 2, 2017

We think Information Management may be overstating a bit with the headline, “Data Science Underlies Everything the Enterprise Now Does.”  While perhaps not underpinning quite “everything,” the use of data analysis has indeed spread throughout many companies (especially larger ones).

Writer Michael O’Connell cites a few key developments over the last year alone, including the rise of representative data, a wider adoption of predictive analysis, and the refinement of customer analytics. He predicts even more changes in the coming year, then uses a hypothetical telecom company for a series of examples. He concludes:

You’ll note that this model represents a significant broadening beyond traditional big data/analytics functions. Such task alignment and comprehensive integration of analytics functions into specific business operations enable high-value digital applications ranging far beyond our sample Telco’s churn mitigation — cross-selling, predictive and condition-based maintenance, fraud detection, price optimization, and logistics management are just a few areas where data science is making a huge difference to the bottom line.

See the article for more on the process of turning data into action, as illustrated with the tale of that imaginary telecom’s data-wrangling adventure.

Cynthia Murrell, May 2, 2017

IBM Uses Watson Analytics Freebie Academic Program to Lure in Student Data Scientists

May 6, 2016

The article on eWeek titled IBM Expands Watson Analytics Program, Creates Citizen Data Scientists zooms in on the expansion of the IBM Watson Analytics academic program, which began last year at 400 global universities. The next phase, according to Watson Analytics public sector manager Randy Messina, is to get Watson Analytics into the hands of students beyond computer science or technical courses. The article explains,

“Other examples of universities using Watson Analytics include the University of Connecticut, which is incorporating Watson Analytics into several of its MBA courses. Northwestern University is building Watson Analytics into the curriculum of its Predictive Analytics, Marketing Mix Models and Entertainment Marketing classes. And at the University of Memphis Fogelman College of Business and Economics, undergraduate students are using Watson Analytics as part of their initial introduction to business analytics.”

Urban planning, marketing, and health care disciplines have also ushered in Watson Analytics for classroom use. Great, so students and professors get to use and learn through this advanced and intuitive platform. But that is where it gets a little shady. IBM is also interested in winning over these students and leading them into the data analytics field. Nothing wrong with that given the shortage of data scientists, but considering the free program and the creepy language IBM uses like “capturing mindshare among young people,” one gets the urge to warn these students to run away from the strange Watson guy, or at least proceed with caution into his lair.

Chelsea Kerwin, May 6, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Open Source Boundaries

July 3, 2015

Now here is an interesting metaphor to explain how open source is sustainable. On OpenSource.com, Bryan Behrenshausen posted the article “Making Collaboration Sustainable,” which references the famous scene from Tom Sawyer in which the title character is forced to whitewash a fence by his Aunt Polly. He does not want to do it, but he persuades his friends that whitewashing is fun and has them pay him for the privilege.

Jim Whitehurst refers to it as the “Tom Sawyer” model, where organizations treat communities as gullible chumps who will work without proper compensation.  It is a type of crowdsourcing, where the organizations benefit from the communities’ resources to further their own goals.  Whitehurst continues that this is not a sustainable approach to crowdsourcing.  It could even backfire at some point.

He continues to say that open source requires a different mindset, one built on commitment from contributors, where everyone is equal and is respected for their efforts.

“Treating internal and external communities as equals, really listening to and understanding their shared goals, and locating ways to genuinely enhance those goals—that’s the key to successfully open sourcing a project. Crowdsourcing takes what it can; it turns people and their ideas into a resource. Open sourcing reciprocates where it can; it channels people and their ideas into a productive community.”

The entire goal of open source is to work with a community that coalesces around shared beliefs and passions. Behrenshausen finishes by noting that an organization might find itself totally changed by engaging with an open source community, and that the change could be for the better. Is that a good thing or a bad thing? For enterprise search solutions, it is at least cause for concern.

Whitney Grace, July 3, 2015


 

Kroll Ontrack Enjoys Predictive Coding Award

October 20, 2014

What happened to Recommind and ZyLAB? We thought they were eDiscovery frontrunners, but now BusinessWire tells us, “Kroll Ontrack Voted Best Predictive Coding Solution in New York Law Journal Survey.” The 2014 survey tallied votes in 90 categories from readers of ALM’s New York Law Journal. The press release quotes Kroll Ontrack’s VP of product management, John Grancarich:

“We are honored to be chosen as the leading predictive coding technology in the industry by New York Law Journal readers. With a focus on amplifying the power of your best reviewers, this award demonstrates the impact ediscovery.com Review predictive coding technology has in driving increased speed, consistency and accuracy in document review.”

The strength of the predictive coding platform, we are told, comes from three parts that work together: workflow technology, “smart training” technology, and quality control/sampling technology. The write-up emphasizes:

“Given the innovative volume control mechanisms of ediscovery.com Review, the award-winning power of Kroll Ontrack’s predictive coding is available throughout the entire culling, filtering, early data assessment and review experience. For more information about Kroll Ontrack predictive coding technology, visit http://www.ediscovery.com/solutions/review/ or watch a demo at http://www.ediscovery.com/review-demo/.”

Headquartered in Eden Prairie, Minnesota, Kroll Ontrack launched as a software firm in 1985. The company’s work with damaged hard drives led to a focus on data recovery. Now, Kroll Ontrack supplies a wealth of data-related solutions to customers in the legal, corporate, and government arenas.

Cynthia Murrell, October 20, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Google Searches, Prediction, and Fabulous Stock Market Returns?

July 28, 2014

I read “Google Searches Hold Key to Future Market Crashes.” The main idea in my opinion is:

Moat [female big thinker at Warwick Business School] continued, “Our results are in line with the hypothesis that increases in searches relating to both politics and business could be a sign of concern about the state of the economy, which may lead to decreased confidence in the value of stocks, resulting in transactions at lower prices.”

So will the Warwick team cash in on the stock market?

Well, there is a cautionary item as well:

“Our results provide evidence of a relationship between the search behavior of Google users and stock market movements,” said Tobias Preis, Associate Professor of Behavioral Science and Finance at Warwick Business School. “However, our analysis found that the strength of this relationship, using this very simple weekly trading strategy, has diminished in recent years. This potentially reflects the increasing incorporation of Internet data into automated trading strategies, and highlights that more advanced strategies are now needed to fully exploit online data in financial trading.”

Rats. Quants are already on this it seems.
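For reference, the “very simple weekly trading strategy” Preis describes, going short after search volume rises relative to its recent average and long after it falls, can be sketched roughly as follows. This is a hypothetical reconstruction, not the Warwick team’s actual code, and the three-week window is our assumption:

```python
def trend_strategy(search_volume, prices, window=3):
    """Cumulative return of a weekly long/short strategy driven by search
    volume. search_volume and prices are aligned weekly series; window is
    the trailing-average length in weeks."""
    value = 1.0
    for t in range(window, len(prices) - 1):
        trailing_avg = sum(search_volume[t - window:t]) / window
        weekly_return = prices[t + 1] / prices[t]
        if search_volume[t] > trailing_avg:
            value *= 2 - weekly_return   # short: profit when prices fall
        else:
            value *= weekly_return       # long: profit when prices rise
    return value
```

On a toy series with flat search volume and steadily rising prices, the rule simply stays long every week; the interesting (and, per Preis, now diminished) effect appears when search spikes precede price drops.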

What’s fascinating to me is that the Warwick experts overlooked a couple of points; namely:

  1. Google is using its own predictive methods to determine what users see when they get a search result based on the behavior of others. Recursion, anyone?
  2. Google provides more searches with each passing day to those using mobile devices. By their nature, traditional desktop queries are not exactly the same as mobile device searches. As a workaround, Google uses clusters and other methods to give users what Google thinks the user really wants. Advertising, anyone?
  3. The stock pickers that are the cat’s pajamas at the B school have to demonstrate their acumen on the trading floor. Does insider trading play a role? Does working at a Goldman Sachs-type of firm help a bit?

Like perpetual motion, folks will keep looking for a way to get an edge. Why are large international banks paying some hefty fines? Humans, I believe, not algorithms.

Stephen E Arnold, July 28, 2014

ZyLAB’s Mary Mack Urges Caution with Predictive Coding

July 9, 2014

An article titled ZyLAB’s Mary Mack on Predictive Coding Myths and Traps for the Unwary on The eDisclosure Information Project offers some insight into the trend of viewing predictive coding as some form of “magic.” The article quickly brushes this idea aside and returns predictive coding to the realm of statistics and technology. It quotes Mary Mack of ZyLAB,

“Machine learning and artificial intelligence for legal applications is our future. It’s a wonderful advance that the judiciary is embracing machine-assisted review in the form of predictive coding. While we steadily move into the second and much less risky generation of predictive coding, there are still traps and pitfalls that are better considered early for mitigation. This session and the session on eDiscovery taboos will expose a few concerns to consider when evaluating predictive coding for specific or portfolio litigation.”

In this article ZyLAB offers a counterpoint to Recommind, which asserted in a recent article that predictive coding is to eDiscovery what a GPS is to driving cross-country. ZyLAB prefers a much more cautious approach to the innovative technology. The article stresses that an objective, fact-based discussion of the merits and pitfalls of predictive coding is a necessary step in its growth.

Chelsea Kerwin, July 09, 2014


Predictive Coding for eDiscovery Users in a Hurry

July 9, 2014

The article on Recommind titled Why eDiscovery Needs GPS (And a Soundtrack) whimsically applies the basic tenets of GPS to the eDiscovery process with the aid of song titles. If you can get past the song-titles bit, there is some meat to the article, though not much. The author suggests several areas where predictive coding might make eDiscovery easier and more efficient, explaining his thinking:

“A good eDiscovery navigator will help you take a reliable Estimation Sample… early on to determine the statistically likely number of responsive documents for any issue in your matter.  It will then plot that destination clearly, along with the appropriate margin of error, and show your status toward it at every point along The Long and Winding Road. It should also clearly display the responsiveness levels you’re experiencing with each iteration as you review the machine-suggested document batches.”

The type of guidance and efficiency that predictive coding offers is already being utilized by companies conducting internal investigations and “reviewing data already seized by a regulatory agency.” The author conditions the usefulness of predictive coding on its being flexible and able to recalculate based on any change in direction. When speed and effectiveness are of paramount importance, a GPS for eDiscovery might be the best possible tool.
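The Estimation Sample the author mentions is standard proportion sampling: review a random sample, measure the responsive rate, and project it onto the full corpus with a margin of error. A rough sketch using the normal approximation at 95% confidence (the function names and defaults are our assumptions, not Recommind’s API):

```python
import math

def sample_size(margin=0.05, z=1.96, p=0.5):
    """Documents to review so the estimated responsiveness rate falls
    within +/- margin at ~95% confidence (p=0.5 is the worst case)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

def estimate_responsive(sample_hits, n, corpus_size, z=1.96):
    """Project a sample's responsive rate onto the corpus, returning the
    point estimate and margin of error in document counts."""
    p = sample_hits / n
    moe = z * math.sqrt(p * (1 - p) / n)
    return corpus_size * p, corpus_size * moe

print(sample_size())  # 385 documents for a +/-5% margin
# 80 responsive docs in a 400-doc sample of a 100,000-doc corpus gives a
# point estimate of roughly 20,000 responsive documents, +/- about 3,900.
```

Tracking this estimate against each machine-suggested review batch is what lets a team see, in the article’s metaphor, how far from the destination they still are.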

Chelsea Kerwin, July 09, 2014


DuPont v Kolon Industries Shines Light on Keyword Search and Spoliation and Ignores Predictive Coding

June 27, 2013

The article on e-discovery 2.0 titled The eDiscovery Trinity: Spoliation Sanctions, Keywords and Predictive Coding explores the three issues most relevant to clients and counsel. One case cited is DuPont v. Kolon Industries, an intellectual property lawsuit in which Kolon complained that DuPont’s forensic experts failed to execute an efficient keyword search: of the nearly 18,000 hits, only about ten percent were relevant. The article explains,

“Kolon then asserted that the “reckless inefficiency” of the search methodology was “fairly attributable to the fact that DuPont ran insipid keywords like ‘other,’ ‘news,’ and ‘mail.’” The court observed how important search terms had become in discovery: “… in the current world of litigation, where so many documents are stored and, hence, produced, electronically, the selection of search terms is an important decision because it, in turn, drives the subsequent document discovery, production and review.”

Ultimately the court favored DuPont, calling its efforts reasonable. The article mentions that although spoliation and keywords were taken into consideration in this particular case, it did not address predictive coding. What would have happened if DuPont had utilized predictive coding is entirely hypothetical, but some do argue that it could have minimized the cost and produced the same group of relevant documents. The case, though an evocative illustration of eDiscovery’s pitfalls, is certainly not the end of the debate.

Chelsea Kerwin, June 27, 2013

