January 16, 2016
I read a darned amazing write up in a marketing blog. First, the story the marketing blog turned into “real news” is a sponsored study. That means an ad. But, even more interesting, the source of the funded study is a mid tier consulting firm. Now you know there are blue chip consulting outfits. I used to work at one and have done consulting projects for other blue chip outfits over the last 40 years. The blue chip outfits are more subtle in their thought leadership, which is one reason why there are blue chip outfits sitting on top of a pile of azure chip and gray chip vendors of expertise.
The second point is that the sponsored study conveniently converted into “real news” claims that revenue comes from predictive analytics. Excuse me. But if a company is paid to flog an ad message, doesn’t that mean the revenue comes from advertising or, in this case, clumsy propaganda? If the predictive analytics thing actually worked revenue wonders, wouldn’t the mid tier consulting firm use predictive analytics to generate cash? Wouldn’t the marketing newsletter use predictive analytics to generate cash?
To see this sponsored content daisy chain in action, navigate to “Forrester Report: Companies Using Predictive Analytics Make More Money.” The mid tier outfit in question is Forrester. Is their logo azure tinted? If not, maybe that is a color to consider. None of the stately expensive tie colors required.
The publication recycling the sponsored content as “real” news is Marketing Land. The name says it all, gentle reader.
What is the argument advanced for EverString by Forrester and Marketing Land?
Here’s the biggie:
The big takeaway: “Predictive marketing analytics use correlates with better business results and metrics.”
That is, compared with those in the survey who do not use predictive analytics (which it calls Retrospective Marketers). “Predictive Marketers,” the report notes, “are 2.9x more likely to report revenue growth at rates higher than the industry average.” They are also 2.1 times more likely to “occupy a commanding leadership position in the product/service markets they serve” and 1.8 times more likely to “consistently exceed goals when measuring the value their marketing organizations contribute to the business,” compared to the Retrospective Marketers in the survey. Forrester analyst Laura Ramos, who was involved in the report, told me the main point is clear: “Predictive analytics pays off.”
What froth? The 2.9x suggests real analysis. Sure, sure, I know about waves and magic squares.
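For what it is worth, a “2.9x more likely” figure is just a ratio of two survey percentages. A minimal sketch, using invented counts (the report does not publish its raw numbers), shows how such a lift is computed:

```python
# Hypothetical illustration of how a "2.9x more likely" figure is derived
# from survey counts. All counts below are invented for the sketch.

def lift(group_a_yes, group_a_total, group_b_yes, group_b_total):
    """Ratio of the share reporting an outcome in group A vs. group B."""
    rate_a = group_a_yes / group_a_total
    rate_b = group_b_yes / group_b_total
    return rate_a / rate_b

# Invented counts: 58 of 100 "predictive marketers" report above-average
# revenue growth vs. 20 of 100 "retrospective marketers".
print(round(lift(58, 100, 20, 100), 1))  # 2.9
```

Note that the ratio says nothing about causation, sample size, or who selected the respondents, which is rather the point.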
There are companies delivering predictive analytics. Some of these outfits have been around for decades. Some of the methods have been known for centuries. I won’t remind you, gentle reader, about my wonky relative and his work for the stats guy Kolmogorov.
Suffice it to say that EverString paid Forrester. Forrester directly or indirectly smiled at Marketing Land. The reader learns that predictive analytics generate revenue.
Nope, the money comes from selling ads and, I assume, “influence.”
Put that in your algorithm and decide which is better: Selling ads or figuring out how to construct a predictive numerical recipe?
Right. Mid tier firms go the ad route. The folks recycling ads as news grab a ride on the propaganda unicycle.
Stephen E Arnold, January 16, 2016
January 15, 2016
The year has barely started and it looks like we already have a new buzzword to nestle into our ears: big algorithms. The term algorithm has been tossed around with big data as one of the driving forces behind powerful analytics. Big data is an encompassing term that refers to privacy, security, search, analytics, organization, and more. The real power, however, lies in the algorithms. Benchtec posted the article, “Forget Big Data-It’s Time For Big Algorithms” to explain how algorithms are stealing the scene.
Data is useless unless you are able to pull something out of it. The only way to get the meat off the bone is to use algorithms. Algorithms might be the powerhouses behind big data, but they are not unique. What is unique is the individual data belonging to different companies.
“However, not everyone agrees that we’ve entered some kind of age of the algorithm. Today competitive advantage is built on data, not algorithms or technology. The same ideas and tools that are available to, say, Google are freely available to everyone via open source projects like Hadoop or Google’s own TensorFlow…infrastructure can be rented by the minute, and rather inexpensively, by any company in the world. But there is one difference. Google’s data is theirs alone.”
Algorithms are ingrained in our daily lives, from the apps run on smartphones to how retailers gather consumer details. Algorithms are a massive untapped market, the article says. One algorithm can be manipulated and implemented for different fields. The article, however, ends on a socially conscious message about using algorithms for good, not evil. The sentiment is good, if a bit forced here, but it does spur some thoughts about how algorithms can be used to study issues related to global epidemics, war, disease, food shortages, and the environment.
January 12, 2016
I spoke with a person who asked me, “Have you seen the 2013 Dave Amerland video?” The video in question is “Google Semantic Search and its Impact on Business.”
I hadn’t. I watched the five-minute video and formed some impressions / opinions about the information presented. Now I wish I had not invested five minutes in serial content processing.
First, the premise that search is marketing does not match up with my view of search. In short, search is more than marketing, although some view search as essential to making a sale.
Second, the video generates buzzwords. There’s knowledge graph, semantic, reputation, Big Data, and more. If one accepts the premise that search is about sales, I am not sure what these buzzwords contribute. The message is that when a user looks for something, the system should display a message that causes a sale. Objectivity does not have much to do with this, nor do buzzwords.
Third, the presentation of the information was difficult for me to understand. My attention was undermined by the wild and wonderful assertions about the buzzwords. I struggled with “from strings to things, from Web sites to people.” What?
The video is ostensibly about the use of “semantics” in content. I am okay with semantic processes. I understand that keeping words and metaphors consistent is helpful to a human and to a Web indexing system.
But the premise. I have a tough time buying in. I want search to return high value, on point content. I want those who create content to include helpful information, details about sources, and markers that make it possible for a reader to figure out what’s sort of accurate and what’s opinion.
I fear that the semantics practiced in this video shriek, “Hire me.” I also note that the video is a commercial for a book which presumably amplifies the viewpoint expressed in the video. That means the video vocalizes, “Buy my book.”
Heck, I am happy if I can get an on point result set when I run a query. No shrieking. No vocalization. No buzzwords. Will objective search be possible?
Stephen E Arnold, January 12, 2016
January 9, 2016
I read another bit of IBM Watson public relations’ fluff. The story was “CES 2016: IBM Announces Watson as a Personal Fitness Coach.” I assume that IBM Watson’s ability to craft recipes with tamarind will go from the kitchen to the gym with aplomb.
According to the article:
The news signals the rapid adoption of Watson technology by consumers and to illustrate this, announced that Under Armour and IBM have developed a new cognitive coaching system. Watson, will serve as a personal health consultant, fitness trainer and assistant by providing athletes with timely, evidence-based coaching about health and fitness-related issues. Where Watson differs from other systems is that it determines outcomes achieved based on others “like you.” It integrates IBM Watson’s technology with the data from Under Armour’s Connected Fitness community – a vast digital health and fitness community of more than 160 million members.
I hope that IBM lifts the weight from the shoulders of IBM stakeholders who want to be buoyed on a rush of new revenues. Vast too. A consumer product?
Stephen E Arnold, January 9, 2016
January 7, 2016
The Alphabet Google thing is getting more focused in its quest for revenue in the post desktop search world. I read “Google Is Tracking Students As It Sells More Products to Schools, Privacy Advocates Warn.” I remember the good old days when the Google was visiting universities to chat about its indexing of the institutions’ Web sites and the presentations related to the book scanning project. This write up seems, if Jeff Bezos’ newspaper is spot on, to suggest that the Alphabet Google thing is getting more interested in students, not just the institutions.
More than half of K-12 laptops or tablets purchased by U.S. schools in the third quarter were Chromebooks, cheap laptops that run Google software…. But Google is also tracking what those students are doing on its services and using some of that information to sell targeted ads, according to a complaint filed with federal officials by a leading privacy advocacy group.
The write up points out:
In just a few short years, Google has become a dominant force as a provider of education technology…. Google’s fast rise has partly been because of low costs: Chromebooks can often be bought in the $100 to $200 range, a fraction of the price for a MacBook. And its software is free to schools.
Low prices. Well, Amazon is into that type of marketing too, right? Collecting data. Isn’t Amazon gathering data for its recommendations service?
My reaction to the write up is that the newspaper will have more revelations about the Alphabet Google thing. The security and privacy issue is one that has the potential to create some excitement in the land of online giants.
Stephen E Arnold, January 7, 2016
January 5, 2016
At this point in the Big Data sensation, many businesses are swimming in data without the means to leverage it effectively. TechWeek Europe cites a recent survey from storage provider Pure Storage in its write-up, “Big Data ‘Fails Businesses’ Due to Access, Skills Shortage.” Interestingly, most of the problems seem to have more to do with human procedures and short-sightedness than any technical shortcomings. Writer Tom Jowitt lists the three top obstacles as a lack of skilled workers, limited access to information, and bureaucracy. He tells us:
“So what exactly is going wrong with Big Data to be causing such problems? Well over half (56 percent) of respondents said bureaucratic red tape was the most serious obstacle for business productivity. ‘Bureaucratic red tape around access to information is preventing companies from using their data to find those unique pieces of insight that lead to great ideas,’ said [Pure Storage’s James] Petter. ‘Data ownership is no longer just the remit of the CIO, the democratisation of insight across businesses enables them to disrupt the competition.’ But regulations are also causing worry, with one in ten of the companies citing data protection concerns as holding up their dissemination of information and data throughout their business. The upcoming EU General Data Protection Regulation will soon affect every single company that stores data.”
The survey reports that missed opportunities have cost businesses billions of pounds per year, and almost three-quarters of respondents say their organizations collect data that is just collecting dust. Both cost and time are reasons that information remains unprocessed. On the other hand, Jowitt points to another survey by CA Technologies; most of its respondents expect the situation to improve, and for their data collections to bring profits down the road. Let us hope they are correct.
Cynthia Murrell, January 5, 2016
January 2, 2016
I want to start off the New Year with look at Watson in the real world. My real world is circumscribed by abandoned coal mines and hollows in rural Kentucky. I am pretty sure this real world is not the real world assumed in “IBM Watson: AI for the Real World.” IBM has tapped Bob Dylan, a TV game show, and odd duck quasi chemical symbols to communicate the importance of search and content processing.
The write up takes a different approach. In fact, the article begins with an interesting comment:
Computers are stupid.
There you go. A snazzy one liner.
The reminder that a man made device is not quite the same as one’s faithful boxer dog or the next door neighbor’s teen is startling.
The article summarizes an interview with a Watson wizard, Steven Abrams, director of technology for the Watson Ecosystem. This is one of those PR inspired outputs which I quite enjoy.
The write up quotes Abrams as saying:
“You debug Watson’s system by asking, ‘Did we give it the right data?'” Abrams said. “Is the data and experience complete enough?”
Okay, but isn’t this Dr. Mike Lynch’s approach? Lynch, as you may recall, was the Cambridge University wizard who was among the first to commercialize “learning” systems in the 1990s.
According to the write up:
Developers will have data sets they can “feed” Watson through one of over 30 APIs. Some of them are based on XML or JSON. Developers familiar with those formats will know how to interact with Watson, he [Abrams] explained.
As those who have used the 25 year old Autonomy IDOL system know, preparing the training data takes a bit of effort. Then, as current content is fed into the Autonomy IDOL system, the humans have to keep an eye on the indexing. Ignore the system too long, and the indexing “drifts”; that is, the learned content is not in tune with the current content processed by the system. Sure, algorithms attempt to keep the calibrations precise, but there is that annoying and inevitable “drift.”
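One common way to spot this kind of drift is to compare the term distribution of the training corpus with the term distribution of newly processed content. A minimal sketch, with invented corpora, tokenizer, and threshold (IDOL’s actual internals are not public), might look like this:

```python
# A toy drift check: compare the word distribution of the training corpus
# with the distribution of incoming content using Jensen-Shannon divergence.
# Corpora, tokenizer, and the 0.3 threshold are all invented for this sketch.
import math
from collections import Counter

def term_distribution(docs):
    """Relative frequency of each word across a list of documents."""
    counts = Counter(word for doc in docs for word in doc.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl(p, q, eps=1e-9):
    # Kullback-Leibler divergence over the union vocabulary, smoothed.
    vocab = set(p) | set(q)
    return sum(p.get(w, eps) * math.log(p.get(w, eps) / q.get(w, eps))
               for w in vocab)

def js_divergence(p, q):
    # Symmetric Jensen-Shannon divergence: 0 means identical distributions.
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0) + q.get(w, 0)) for w in vocab}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

training = ["mainframe storage server", "server storage backup"]
current = ["cloud container kubernetes", "kubernetes cluster deploy"]

drift = js_divergence(term_distribution(training), term_distribution(current))
if drift > 0.3:  # invented threshold; tune against a held-out sample
    print("recalibrate: vocabulary has drifted")
```

The point the sketch makes is the one the post makes: someone (or some monitoring job) has to keep watching, because the divergence only grows as the live content moves away from what the system was trained on.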
IBM’s system, which strikes me as a modification of the Autonomy IDOL approach with a touch of Palantir analytics stirred in, is likely to be one expensive puppy to groom for the dog show ring.
The article profiles the efforts of a couple of IBM “partners” to make Watson useful for the “real” world. But the snip I circled in IBM red-ink red was this one:
But Watson should not be mistaken for HAL. “Watson will not initiate conduct on its own,” IBM’s Abrams pointed out. “Watson does not have ambition. It has no objective to respond outside a query.” “With no individual initiative, it has no way of going out of control,” he continued. “Watson has a plug,” he quipped. It can be disconnected. “Watson is not going to be applied without individual judgment … The final decision in any Watson solution … will always be [made by] a human, being based on information they got from Watson.”
My hunch is that Watson will require considerable human attention. But it may perform best on a TV show or in a motion picture where post production can smooth out the rough edges.
Maybe entertainment is “real”, not the world of a Harrod’s Creek hollow.
Stephen E Arnold, January 2, 2016
December 30, 2015
After Christmas comes New Year’s Eve, and news outlets take the time to reflect on the changes in the past year. Usually they focus on celebrities who died, headlining news stories, technology advancements, and new scientific discoveries. One of the geeky news outlets on the Internet is Gizmodo, and it took its shot at highlighting things that happened in 2015. But rather than focusing on new advances, it checks off “The Most Overhyped Scientific Discoveries In 2015.”
There was extreme hype about an alien megastructure in outer space, to the point that Neil deGrasse Tyson had to tell folks they were overreacting. Bacon and other processed meats were labeled as carcinogens! The media, of course, took the bacon link and ran with it, causing extreme panic; then again, by that standard everything from cellphones to sugar causes cancer.
Global warming is a hot topic that always draws arguments, and it appears to be getting worse the more carbon dioxide humans release into the atmosphere. Humans are always ready for a quick solution, and one story held that a little ice age, brought on by diminishing solar activity, would rescue Earth. It turns out carbon dioxide pollution does more damage than solar variability can fix. Another story involved the nearly indestructible tardigrades and the possibility of horizontal gene transfer, but a dispute between two rival labs over tardigrade research set back further efforts to understand the unique creature.
The biggest overblown scientific discovery, in our opinion, is NASA’s warp drive. Humans are desperate for breakthroughs in space travel, so we can blast off to Titan’s beaches for a day and then come home within our normal Earth time. NASA experimented with an EM Drive:
“Apparently, the engineers working on the EM Drive decided to address some of the skeptic’s concerns head-on this year, by re-running their experiments in a closed vacuum to ensure the thrust they were measuring wasn’t caused by environmental noise. And it so happens, new EM Drive tests in noise-free conditions failed to falsify the original results. That is, the researchers had apparently produced a minuscule amount of thrust without any propellant.
Once again, media reports made it sound like NASA was on the brink of unveiling an intergalactic transport system.”
NASA might be working on a warp drive prototype, but the science is based on short-term experiments, none of it has been peer reviewed, and NASA has not claimed that the engine even works.
The media takes the idea snippets and transforms them into overblown news pieces that are based more on junk science than real scientific investigation.
December 29, 2015
I read an unusual chunk of content marketing for IBM’s supercomputer. As you may know, IBM captured a US government project for supercomputers. I am not sure if IBM is in the quantum computing hunt, but I assume the IBM marketing folks will make this clear as the PR machine grinds forward in 2016.
The article on my radar is the link baity “Scientists Discover Oldest Words in the English Language, Predict Which Ones Are Likely to Disappear.”
First, the supercomputer rah rah from a university in the UK:
The IBM supercomputer at the University of Reading, known as ThamesBlue, is now one year old. Before it arrived, it took an average of six weeks to perform a computational task such as comparing two sets of words in different languages, now these same tasks can be executed in a few hours. Professor Vassil Alexandrov, the University’s leading expert on computational science and director of the University’s ACET Centre said: “The new IBM supercomputer has allowed the University of Reading to push to the forefront of the research community. It underpins other important research at the university, including the development of accurate predictive models for environmental use. Based on weather patterns and the amounts of pollutant in the atmosphere, our scientists have been able to pinpoint likely country-by-country environmental impacts, such as the affect airborne chemicals will have on future crop yields and cross-border pollution”.
There you go. Testimony. Look at the wonderful use case for the IBM supercomputer: Environmental impact analyses.
Now back to the language research. It seems to me that the academic research scientists are comparing word lists. The concept seems very Watson like even though I did not spot a reference to IBM’s much hyped smart system.
The less frequently a word is used, the greater the likelihood that word will be forgotten, fall into disuse, or be tossed in the dictionary writer’s dust bin. The article offers examples of words in trouble.
I would suggest that IBM’s marketing corpus from the foundation of the company as a vendor of tabulating equipment right up to the PurePower name be analyzed. Well, I am no academic, and I am not sure that the University of Reading would win a popularity contest at IBM after predicting which of its product names will fall into disuse in the future. (I sure would like to see the analysis for Watson, however.)
My thought is that frequency of use analyses are useful. A fast computer is helpful. I am not sure about the embedded IBM commercial in the write up.
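The core of such a frequency-of-use analysis is simple enough to sketch without a supercomputer. The following toy version, with an invented corpus and cutoff (the Reading team’s actual method and data are not described in the write up), ranks words by frequency and flags the rarest as “at risk” under the study’s premise that rare words fade first:

```python
# A toy frequency-of-use analysis: count word occurrences in a corpus and
# flag words at or below a rarity cutoff as "at risk" of falling into
# disuse. The corpus and cutoff are invented for this sketch.
from collections import Counter

def at_risk_words(corpus, cutoff=1):
    """Return words appearing no more than `cutoff` times, sorted."""
    counts = Counter(corpus.lower().split())
    return sorted(w for w, c in counts.items() if c <= cutoff)

corpus = "data data data watson watson tabulating"
print(at_risk_words(corpus))  # ['tabulating']
```

Scaling this to centuries of text across languages is where the six-weeks-to-hours hardware story comes in; the analysis itself is counting.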
Stephen E Arnold, December 28, 2015
December 29, 2015
What I find interesting is how data analysts, software developers, and other big data pushers are always saying things like “hidden insights await in data” or “your business will turn around with analytics.” These people make it seem like a big thing, when it is really the only logical outcome that could follow from employing new data analytics. Marketing Land continues with this idea in the article, “Intentional Serendipity: How Marketing Analytics Trigger Curiosity Algorithms And Surprise Discoveries.”
Serendipitous actions take place at random and cannot be predicted, but the article proclaims that, with the greater amount of data available to marketers, serendipitous outcomes can be optimized. Data shows interesting trends, including surprises that make sense but were never considered before the data brought them to our attention.
“Finding these kinds of data surprises requires a lot of sophisticated natural language processing and complex data science. And that data science becomes most useful when the patterns and possibilities they reveal incorporate the thinking of human beings, who contribute the two most important algorithms in the entire marketing analytics framework — the curiosity algorithm and the intuition algorithm.”
The curiosity algorithm is the simple process of triggering a person’s curious reflex, so the person can discern which patterns lead to a meaningful discovery. The intuition algorithm is basically trusting your gut and having the data to back up your faith. Together these make up explanatory analytics, which helps people change outcomes based on data.
The article follows up with a step-by-step plan for organizing your approach to explanatory analytics, which reads like a basic business plan but is helpful for getting the process rolling. In short, read your data and see if something new pops up.