Customize Your News with Semantic Search

January 28, 2016

There are many apps available that aggregate news stories catering to your interests: Feedly, Google News, Pulp, and other RSS readers.  While these apps have their strengths and weaknesses, one question you need to ask is: do they use semantic search?  If you want a news app designed specifically to bring you stories using semantic search, there is “Algo: Semantic Search Engine For Customizable News,” available on iTunes.

SkyGrid developed Algo, and Apple named it a “Best News App”.  It has earned a 4.5-star rating.  Algo was designed to keep users up to date on the news and let them follow topics of interest and favorite publications to create a personalized newspaper.

Algo is described as:

“The only true real-time news aggregator. Simple, fast, and reliable, Algo is the only place to follow all of your favorite topics and interests. Search for anything you want! From people to TV shows to companies to finance, follow your interests on Algo. Set notifications for each topic and be notified as information updates in real-time.”

Other Algo features include the ability to share articles on any service, save favorite articles, configure notifications, and receive up-to-date news in real time.  Algo’s reliance on semantic search is one of the reasons it has gained such favor with Apple and iTunes users.


Whitney Grace, January 28, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Woman Fights Google and Wins

January 21, 2016

Google is one of those big corporations that, if you have a problem with it, you might as well let it go.  Google is powerful, respected, and has (we suspect) a very good legal department.  There are problems with Google, such as the “right to be forgotten,” and Australian citizens have a big bone to pick with the search engine.  Australian News reports that “SA Court Orders Google Pay Dr. Janice Duffy $115,000 Damages For Defamatory Search Results.”

Duffy filed a lawsuit against Google for displaying her name alongside false and defamatory content in its search results.  Google claimed no responsibility for the actual content, as it was not the publisher.  The South Australian Supreme Court felt differently:

“In October, the court rejected Google’s arguments and found it had defamed Dr Duffy due to the way the company’s patented algorithm operated.  Justice Malcolm Blue found the search results either published, republished or directed users toward comments harmful to her reputation.  On Wednesday, Justice Blue awarded Dr Duffy damages of $100,000 and a $15,000 lump sum to cover interest.”

Duffy was not the only one upset with Google.  Other Australians filed their own complaints, including Michael Trkulja, who claimed search results linked him to crime, and Shane Radbone, who sued to learn the identities of bloggers who wrote negative comments about him.

Technically, Google is not responsible for the content, so it does not seem that the company should be held accountable.  However, Google’s algorithms are wired to bring up the most popular and in-depth results.  Should Google develop a filter that measures negative and harmful information, or is that too subjective?


Whitney Grace, January 21, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Hello, Big Algorithms

January 15, 2016

The year has barely started, and it looks like we already have a new buzzword to nestle into our ears: big algorithms.  The term algorithm has been tossed around with big data as one of the driving forces behind powerful analytics.  Big data is an encompassing term that covers privacy, security, search, analytics, organization, and more.  The real power, however, lies in the algorithms.  Benchtec posted the article “Forget Big Data - It’s Time For Big Algorithms” to explain how algorithms are stealing the scene.

Data is useless unless you are able to pull something out of it.  The only way to get the meat off the bone is to use algorithms.  Algorithms might be the powerhouses behind big data, but they are not unique.  What is unique is the individual data belonging to different companies.

“However, not everyone agrees that we’ve entered some kind of age of the algorithm.  Today competitive advantage is built on data, not algorithms or technology.  The same ideas and tools that are available to, say, Google are freely available to everyone via open source projects like Hadoop or Google’s own TensorFlow…infrastructure can be rented by the minute, and rather inexpensively, by any company in the world. But there is one difference.  Google’s data is theirs alone.”

Algorithms are ingrained in our daily lives, from the apps run on smartphones to how retailers gather consumer details.  The article says algorithms are a massive untapped market; one algorithm can be manipulated and implemented for different fields.  The article, however, ends on a socially conscious message about using algorithms for good, not evil.  It is a good sentiment, if a bit forced here, but it does spur some thoughts about how algorithms can be used to study issues related to global epidemics, war, disease, food shortages, and the environment.

Whitney Grace, January 15, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

IBM and Yahoo Hard at Work on Real-Time Data Handling

January 7, 2016

The article titled “What You Missed in Big Data: Real-Time Intelligence” on SiliconANGLE speaks to the difficulties corporations face in handling ever-increasing volumes of real-time data. Recently, IBM created supplementary stream processing services, including a machine learning engine that comes equipped with algorithm-building capabilities. The algorithms aid in choosing relevant information from the numerous connected devices of a single business. The article explains,

“An electronics manufacturer, for instance, could use the service to immediately detect when a sensor embedded in an expensive piece of equipment signals a malfunction and automatically alert the nearest technician. IBM is touting the functionality as a way to cut through the massive volume of machine-generated signals produced every second in such environments, which can overburden not only analysts but also the technology infrastructure that supports their work.”
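The detection step IBM describes can be pictured with a toy example. This is not IBM’s service, just a minimal sketch of the idea: watch a sensor stream, flag readings that deviate sharply from the recent rolling window, and reduce “alert the nearest technician” to yielding the offending reading.

```python
from collections import deque
from statistics import mean, stdev

def alert_stream(readings, window=20, threshold=4.0):
    """Yield (index, value) for readings far outside the recent rolling window.

    A crude stand-in for 'detect a malfunction': anything more than
    `threshold` standard deviations from the window mean is flagged.
    """
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value  # in production: notify the technician
        recent.append(value)

# A steady sensor with one spike injected at position 50.
readings = [20.0 + 0.1 * (i % 5) for i in range(100)]
readings[50] = 95.0
print(list(alert_stream(readings)))  # [(50, 95.0)]
```

Only the spike is reported; the surrounding machine-generated signals never reach a human, which is the “cut through the volume” point the article is making.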

Yahoo has been working on just that issue, and recently open-sourced its engineers’ answer. In a demonstration for the press, the technology proved able to power through 100 million values in under three seconds; typically, such a count would require two and a half minutes. The target of this sort of technology is measuring extreme numbers like visitor statistics. Accuracy takes a back seat to speed because the results are estimates, but at such speeds the sacrifice is worth it.
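The speed-for-accuracy trade works because a sketch keeps a tiny summary instead of every value. The following is not Yahoo’s open-sourced library; it is a toy K-minimum-values estimator in Python, which retains only the k smallest hash values and still lands within a few percent of the true distinct count.

```python
import hashlib
import heapq
import random

def kmv_estimate(items, k=1024):
    """Estimate the number of distinct items with a K-minimum-values sketch.

    Memory is O(k) regardless of stream size: only the k smallest normalized
    hash values are retained. If h_k is the k-th smallest hash in [0, 1),
    the classic estimator is (k - 1) / h_k.
    """
    kept = set()   # the k smallest hashes seen so far
    heap = []      # max-heap (values negated) over the kept hashes
    for item in items:
        digest = hashlib.blake2b(str(item).encode(), digest_size=6).digest()
        h = int.from_bytes(digest, "big") / float(1 << 48)  # uniform in [0, 1)
        if h in kept:
            continue
        if len(heap) < k:
            kept.add(h)
            heapq.heappush(heap, -h)
        elif h < -heap[0]:                       # smaller than current k-th min
            kept.remove(-heapq.heappushpop(heap, -h))
            kept.add(h)
    if len(heap) < k:       # fewer than k distinct items seen: count is exact
        return len(heap)
    return int((k - 1) / -heap[0])

random.seed(7)
stream = [random.randrange(2_000_000) for _ in range(500_000)]
exact = len(set(stream))
approx = kmv_estimate(stream)
print(exact, approx)  # the estimate lands within a few percent of the truth
```

The sketch touches each value once and keeps roughly a thousand numbers, which is why this style of counting scales to the visitor-statistics volumes the article mentions.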

Chelsea Kerwin, January 7, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Marketing Analytics Holds Many Surprises

December 29, 2015

What I find interesting is how data analysts, software developers, and other big data pushers are always saying things like “hidden insights await in data” or “your business will turn around with analytics.”  These people make it seem like a big thing, when it is really the only logical outcome that could follow from employing new data analytics.  Marketing Land continues with this idea in the article, “Intentional Serendipity: How Marketing Analytics Trigger Curiosity Algorithms And Surprise Discoveries.”

Serendipitous events take place at random and cannot be predicted, but the article proclaims that, with the greater amount of data now available to marketers, serendipitous outcomes can be optimized.  Data shows interesting trends, including surprises that make sense but were never considered before the data brought them to our attention.

“Finding these kinds of data surprises requires a lot of sophisticated natural language processing and complex data science. And that data science becomes most useful when the patterns and possibilities they reveal incorporate the thinking of human beings, who contribute the two most important algorithms in the entire marketing analytics framework — the curiosity algorithm and the intuition algorithm.”

The curiosity algorithm is the simple process of triggering a person’s curious reflex; the person can then discern which patterns might lead to a meaningful discovery.  The intuition algorithm is basically trusting your gut and having the data to back up your faith.  Together, these turn explanatory analytics into something that helps people change outcomes based on data.

The article follows up with a step-by-step plan for organizing your approach to explanatory analytics; it reads like a basic business plan, but it is helpful for getting the process rolling.  In short, read your data and see if something new pops up.

Whitney Grace, December 29, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Another Good Reason for Diversity in Tech

December 29, 2015

Just who decides what we see when we search? If we’re using Google, it’s a group of Google employees, of course. The Independent reports, “Google’s Search Results Aren’t as Unbiased as You Think—and a Lack of Diversity Could Be the Cause.” Writer Doug Bolton points to a TEDx talk by Swedish journalist Andreas Ekström, in which Ekström describes times Google has, and has not, counteracted campaigns to deliberately bump certain content. For example, the company did act to decouple racist imagery from searches for “Michelle Obama,” but did nothing to counter the association between a certain Norwegian murderer and dog poop. Bolton writes:

“Although different in motivation, the two campaigns worked in exactly the same way – but in the second, Google didn’t step in, and the inaccurate Breivik images stayed at the top of the search results for much longer. Few would argue that Google was wrong to end the Obama campaign or let the Breivik one run its course, but the two incidents shed light on the fact that behind such a large and faceless multi-billion dollar tech company as Google, there’s people deciding what we see when we search. And in a time when Google has such a poor record for gender and ethnic diversity and other companies struggle to address this imbalance (as IBM did when they attempted to get women into tech by encouraging them to ‘Hack a Hairdryer’), this fact becomes more pressing.”

The article notes that only 18 percent of Google’s tech staff worldwide are women, and that the staff is just two percent Hispanic and one percent black. Ekström’s talk has many asking what unperceived biases lurk in Google’s algorithms, and some are calling anew on the company to expand its hiring diversity. Naturally, though, any tech company can only do so much until more girls and minorities are encouraged to explore the sciences.

Cynthia Murrell, December 29, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

How Multitasking Alters Our Brains

December 22, 2015

An article at Forbes, “Is Technology Making Us Dumb and Numb?” brings neuroscience to bear on the topic, and the conclusion is not pretty. Contributor Christine Comaford, who regularly writes about neuroscience in relation to leadership, tells us:

“Multitasking reduces gray matter density in the area of the brain called the Anterior Cingulate Cortex (ACC)…. The ACC is involved in a number of cognitive and emotional functions including reward anticipation, decision-making, empathy, impulse control, and emotion. It acts like a hub for processing and assigning control to other areas of the brain, based on whether the messages are cognitive (dorsal) or emotional (ventral). So when we have reduced gray matter density in the ACC due to high media multitasking, over time we see reduced ability to make sound decisions, to modulate our emotions, to have empathy and to connect emotionally to others.”

Hmm, is that why our national discourse has become so uncivil in recent years? See the article for a more detailed description of the ACC and the functionality of its parts. Maybe if we all kick the multitasking habit, the world will be a slightly kinder place.

Cynthia Murrell, December 22, 2015


Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Google and Quantum Computing

December 14, 2015

I read “What Is the Computational Value of Finite Range Tunneling?” The paper concerns a numerical recipe running on Google’s spiffy new D-Wave quantum computer. I also read “Google, D-Wave, and the Case of the Factor-10^8 Speedup for WHAT?” This paper points out that Google is making progress with the D-Wave quantum computer. How much progress is a matter for debate among the aficionados of quantum computing. If you are interested in the benchmark, the Google write up and the “For WHAT” essay are both quite good.

When I worked through these documents, several thoughts crossed my mind, and I jotted down several. Here are two which will not require you, oh, gentle reader, to wade into the murky world of benchmarks and qubits:

1. IBM has a deal, which may be announced by now, to build high performance computer systems for the US government. My hunch is that IBM deserves a pat on its blue suited back for landing a contract to produce something tangible, unlike the Watson marketing hype-o-rama. To my knowledge, Google does not have this sort of deal. Google is writing about another company’s computer, not developing its own super systems. I think this is interesting. In the good old days prior to 2007, the Google was more of a doer. Now Google is a refiner.

2. Googlers have made the D-Wave perform. That’s good. The problem is that like the ORNL wonks using Crays to index health text, the computer is not one available to lots and lots of people. In fact, to verify the Googlers’ achievement, folks with Fancy Dan equipment have to replicate what the Googlers achieved. There will be lots of controversy. The Cray is a user friendly device compared to the D-Wave. Google seems to be working overtime to convince people that it is still a technology leader. I wrote about the gaggle of Googlers talking about Google’s artificial intelligence and machine learning achievements. My question is, “Is the Google feeling the heat from companies doing better in cutting edge technologies?”

Stephen E Arnold, December 14, 2015

Big Data: Some Obvious Issues

November 30, 2015

Imagine this. Information Week writes an article which does not mix up marketing jargon, cheerleading, and wild generalizations. No. It’s true. Es verdad.

Navigate to “Big Data & The Law Of Diminishing Returns.” The write up is a recycling of comments from an Ivory Tower type at Harvard University. Not enough real world experience? Never fear, the poobah is Cathy O’Neil, who worked in the private sector.

Here are her observations as presented by the estimable Information Week real journalists. Note: my observation appears in italics after the words of wisdom.

“The [Big Data] technology is encouraging people to use algorithms they don’t understand.” My question: How many information professionals got an A in math?

“Know what you don’t know. It’s hard.” My question: How does one know oneself if the self is trying to hit one’s numbers and work with concepts about which one’s information is skewed by Google-boosted assumptions about one’s intelligence?

The write up includes this bummer of a statement for the point-and-click analytics wizards:

“I’d rather have five orthogonal modest data sets than one ginormous data set along a single axis…That is where the law of diminishing returns kicks in.” This is attributed to Caribou Hoenig, a venture capitalist. My question: What is ginormous?

The write up also reveals, without much questioning of the analytic method, that IDC has calculated that the size of the Big Data market will be “$58.6 billion by the end of the year, and it would grow to $101.9 billion by 2019.”

Perhaps clear thinking about data begins with some thinking about where numbers come from, the validity of the data set, and the methods used to figure out the future.

Oh, right. That’s the point of the article. Too bad the write up ignores its own advice. I like that ginormous number in 2019. Yep, clear thinking about data abounds.

Stephen E Arnold, November 30, 2015

Inferences: Check Before You Assume the Outputs Are Accurate

November 23, 2015

Predictive software works really well as long as the software does not have to deal with horse races, the stock market, or the actions of a single person and his closest pals.

“Inferences from Backtest Results Are False Until Proven True” offers a useful reminder to those who want to depend on algorithms someone else set up. The notion is helpful when the data processed are unchecked, unfamiliar, or simply assumed to be spot on.

The write up says:

the primary task of quantitative traders should be to prove specific backtest results worthless, rather than proving them useful.

What throws backtests off the track? The write up provides a useful list of reminders:

  1. Data-mining and data snooping bias
  2. Use of non-tradable instruments
  3. Unrealistic accounting of frictional effects
  4. Use of the market close to enter positions instead of the more realistic open
  5. Use of dubious risk and money management methods
  6. Lack of effect on actual prices

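The first item, data snooping, is easy to demonstrate without any market data at all. The sketch below is my illustration, not the article’s: it backtests many purely random strategies on a simulated price series with zero true edge, keeps the in-sample winner, and shows that its apparent edge evaporates on fresh data.

```python
import random

random.seed(42)

def random_returns(n):
    """Simulated daily returns with no exploitable structure."""
    return [random.gauss(0, 0.01) for _ in range(n)]

def strategy_pnl(signal_seed, returns):
    """A 'strategy' that is just a seeded coin flip per day: long or flat."""
    rng = random.Random(signal_seed)
    return sum(r for r in returns if rng.random() < 0.5)

in_sample = random_returns(250)   # one year of fake history
out_sample = random_returns(250)  # the year after

# Snoop: try 500 random strategies and keep the in-sample winner.
best = max(range(500), key=lambda s: strategy_pnl(s, in_sample))

print(round(strategy_pnl(best, in_sample), 3))   # looks impressively positive
print(round(strategy_pnl(best, out_sample), 3))  # near zero: no real edge
```

The in-sample winner was selected precisely because randomness favored it once, which is why the author insists that backtest results are false until proven true.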
The author is concerned about financial applications, but the advice may be helpful to those who just want to click a link, output a visualization, and assume the big spikes are really important to the decision you will influence in one hour.

One point I highlighted was:

Widely used strategies lose any edge they might have had in the past.

Degradation occurs, just like statistical drift in Bayesian-based systems. Exciting, if you make decisions on outputs known to be flawed. How are those automatic indexing, business intelligence, and predictive analytics systems working out?

Stephen E Arnold, November 23, 2015
