Let the Tweets Lead Your Marketing, Come What May

September 14, 2017

It seems that sales and marketing departments just can’t keep up with consumer patterns and behaviors. The latest example of this is explained in a DMA article outlining how to utilize social media to reach target leads. As people rely more on their own search and online acumen and less on professionals (IRL), marketing has to adjust.

Aseem Badshah, founder and CEO of Socedo, explains the problem and a possible solution:

Traditionally, B2B marketers created content based on the products they want to promote. Now that so much of the B2B decision making process occurs online, content has to be more customer-centric. The current set of website analytics tools provide some insights, but only on the audience who have already reached your website. Intent data from social media can help you make your content more relevant. By analyzing social media signals and looking at which signals are picking up in volume over time, you can gain new insights into your audience that helps you create more relevant content.
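
What “signals picking up in volume over time” means in practice is simple trend detection. Here is a minimal sketch in Python, assuming the hashtag mentions have already been collected upstream; the data and growth threshold are invented for illustration:

```python
from collections import defaultdict

# Hypothetical input: (week_number, hashtag) pairs harvested from a
# social media feed. Collection and cleaning are assumed to happen upstream.
mentions = [
    (1, "#devops"), (1, "#devops"), (1, "#crm"),
    (2, "#devops"), (2, "#devops"), (2, "#devops"), (2, "#crm"),
]

def rising_signals(mentions, min_growth=1.5):
    """Return hashtags whose week-over-week volume grew by min_growth or more."""
    counts = defaultdict(lambda: defaultdict(int))
    for week, tag in mentions:
        counts[tag][week] += 1
    rising = []
    for tag, by_week in counts.items():
        weeks = sorted(by_week)
        for prev, cur in zip(weeks, weeks[1:]):
            if by_week[cur] >= min_growth * by_week[prev]:
                rising.append((tag, prev, cur, by_week[cur] / by_week[prev]))
    return rising

print(rising_signals(mentions))  # [('#devops', 1, 2, 1.5)]
```

The “insight,” in other words, is a counter and a ratio; the hard part is deciding which signals deserve a response.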

While everything Badshah says may be true, one has to ask: is following the masses always a good thing? If a business wants to maintain its integrity within its field, is it in its best interest to follow the lead of its target demographic’s hashtags, or to work harder at marketing its product or service despite the apparent Twitter-provided disinterest?

Catherine Lamsfuss, September 14, 2017

IBM Cloud As a Rube Goldberg Machine

September 10, 2017

Navigate to AdAge. Select the IBM ad. Its title is “IBM Cloud: Cloud for Enterprise: Pinball.” I snapped this image from the video, which seems to represent a pinball game. Does this look like a Rube Goldberg machine? I think so.

[Image: still from the IBM Cloud “Pinball” video]

Stephen E Arnold, September 10, 2017

Smart Software: An AI Future and IBM Wants to Be There for 10 Years

September 7, 2017

I read “Executives Say AI Will Change Business, but Aren’t Doing Much about It.” My takeaway: There is no there there—yet. I noted these “true factoids” waltzing through the MIT-charged write up:

  • 20% of the 3,000 companies in the sample use smart software
  • 5% use smart software “extensively” (No, I don’t know what extensively means either.)
  • About one third of the companies in the sample “have an AI strategy in place.”

Pilgrims, that means there is money to be made in the smart software discontinuity. Consulting and coding are a match made in MBA heaven.

If my observation is accurate, IBM’s executives read the tea leaves and decided to contribute a modest $240 million for the IBM Watson Artificial Intelligence Lab at MIT. You can watch a video and read the story from Fortune Magazine at this link.

The Fortune “real” journalism outfit states:

This is the first time that a single company has underwritten an entire laboratory at the university.

However, the money will be paid out over 10 years. Lucky parents with children at MIT can look forward to undergrad, graduate, and post graduate work at the lab. No living in the basement for this cohort of wizards.

Several questions arise:

  1. Which institution will “own” the intellectual property of the wizards from MIT and IBM? What about the students’ contributions?
  2. How will US government research be allocated when there is a “new” lab which is funded by a single commercial enterprise? (Hello, MITRE, any thoughts?)
  3. Will young wizards who formulate a better idea be constrained? Might the presence or shadow of IBM choke off some lines of innovation until the sheepskin is handed over?
  4. Are Amazon, Facebook, Google, and Microsoft executives kicking themselves for not thinking up this bold marketing play and writing an even bigger check?
  5. Will IBM get a discount on space advertising in MIT’s subscription publications?

Worth monitoring, because other big-name schools might have a model to emulate. Company-backed smart software labs might become the next big thing to pitch for some highly regarded, market-oriented institutions. How much would Cambridge University or the stellar University of Louisville capture if they too “sold” labs to commercial enterprises? (Surprised at my inclusion of the University of Louisville? Don’t be. It’s an innovator in basketball recruiting and recruiting real estate mogul talent. Smart software is a piece of cake for this type of institution of higher learning.)

Stephen E Arnold, September 7, 2017

Old School Searcher Struggles with Organizing Information

September 7, 2017

I read a write up called “Semantic, Adaptive Search – Now that’s a Mouthful.” I cannot decide if the essay is intended to be humorous, plaintive, or factual. The main idea in the headline is that there is a type of search called “semantic” and “adaptive.” I think I know about the semantic notion. We just completed a six-month analysis of syntactic and semantic technology for one of my few remaining clients. (I am semi-retired, as you may know, but tilting at the semantic and syntactic windmills is great fun.)

The semantic notion has inspired such experts as David Amerland, an enthusiastic proponent of the power of positive thinking and tireless self-promotion, to heights of fame. The syntax idea gives experts in linguistics hope for lucrative employment opportunities. But most implementations of these hallowed “techniques” deliver massive computational overhead and outputs which require legions of expensive subject matter experts to keep on track.

The headline is one thing, but the write up is about another topic in my opinion. Here’s the passage I noted:

The basic problem with AI is no vendor is there yet.

Okay, maybe I did not correctly interpret “Semantic, Adaptive Search—Now That’s a Mouthful.” I just wasn’t expecting artificial intelligence, a very SEO type term.

But I was off base. The real subject of the write up seems to be captured in this passage:

I used to be organized, but somehow I lost that admirable trait. I blame it on information overload. Anyway, I now spend quite a bit of time searching for my blogs, white papers, and research, as I have no clue where I filed them. I have resorted to using multiple search criteria. Something I do, which is ridiculous, is repeat the same erroneous search request, because I know it’s there somewhere and the system must have misunderstood, right? So does the system learn from my mistakes, or learn the mistakes? Does anyone know?

Okay, disorganized. I would never have guessed without a title that references semantic and adaptive search, the lead paragraph about artificial intelligence, and this just cited bit of exposition which makes clear that the searcher cannot make the search systems divulge the needed information.

One factoid in the write up is that a searcher will use 2.73 terms per query. I think that number applies to desktop boat anchor searches from the Dark Ages of old school querying. Today, more than 55 percent of queries come from mobile devices, and about 20 percent of those are voice based. Other queries just happen because a greater power like Google or Microsoft decides that what you “really” wanted is just the ticket. To me, the shift from desktop to mobile makes the number of search terms in a query a tough number to calculate. How does one reconcile data automatically delivered to a Google Map when one is looking for a route with an old school query of 2.73 terms? Answer: You just use whatever number pops out of a quick Bing or Google search from a laptop and go with the datum from a hit in an ad choked result list.
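
To make the arithmetic concrete, here is a back-of-the-envelope sketch of a blended terms-per-query figure. The segment shares come from the numbers above; the per-segment term counts are invented for illustration, which is exactly the problem:

```python
# Segment shares reflect the figures cited above (55% mobile, 20% of
# mobile queries voice based). The average terms per query for each
# segment are assumptions, not measurements.
segments = {
    # name: (share of all queries, assumed avg terms per query)
    "desktop":      (0.45, 2.73),        # the old school number
    "mobile_typed": (0.55 * 0.80, 2.0),  # assumption: shorter typed queries
    "mobile_voice": (0.55 * 0.20, 6.0),  # assumption: longer spoken queries
}

blended = sum(share * terms for share, terms in segments.values())
print(f"blended average: {blended:.2f} terms per query")  # ~2.77 here
```

Change the assumed per-segment averages and the blended number moves. That is the point: a single figure hides the mix.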

The confused state of search and content processing vendors is evident in their marketing, their reliance on jargon and mumbo jumbo, and fuzzy thinking about obtaining information to meet a specific information need.

I suppose there is hope. One can embrace a taxonomy and life will be good. On the other hand, disorganization does not bode well for a taxonomy created by a person who cannot locate information.

Well, one can use smart software to generate those terms, the Use Fors and the See Alsos. One can rely on massive amounts of Big Data to save the day. One can allow a busy user of SharePoint to assign terms to his or her content. Many good solutions which make information access a thrilling discipline.
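
For readers who never sat through a cataloging class: “Use For” maps variant terms to a preferred term, and “See Also” links related preferred terms. A toy sketch, with invented entries, of how such a vocabulary supports query expansion:

```python
# Toy controlled vocabulary. "use_for" maps variant terms to the
# preferred term; "see_also" lists related preferred terms.
thesaurus = {
    "automobiles": {"use_for": ["cars", "autos"], "see_also": ["trucks"]},
    "trucks":      {"use_for": ["lorries"],       "see_also": ["automobiles"]},
}

def normalize(term):
    """Map a variant term to its preferred term, if the vocabulary knows it."""
    for preferred, entry in thesaurus.items():
        if term == preferred or term in entry["use_for"]:
            return preferred
    return term  # unknown terms pass through untagged

def expand(term):
    """Return the preferred term plus its See Also relatives."""
    preferred = normalize(term)
    related = thesaurus.get(preferred, {}).get("see_also", [])
    return [preferred] + related

print(expand("lorries"))  # ['trucks', 'automobiles']
```

Note that every mapping in that little dictionary is a human editorial decision, which is why a disorganized person is unlikely to produce a useful one.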

Now where did I put that research for my latest book, “The Dark Web Notebook”? Ah, I know. In a folder called “DWNB Research” on my back up devices with hard copies in a banker’s box labeled “DWNB 2016-2017.”

Call me old fashioned, but the semantic, syntactic, artificially intelligent razzmatazz underscores the triumph of jargon over systems and methods which deliver on point results in response to a query from a person who knows that which he or she seeks.

Plus, I have some capable research librarians to keep me on track. Yep, real humans with MLS degrees, online research expertise, and honest-to-god reference desk experience.

Smart software and jargon require more than disorganization and arm waving accompanied by toots from the jargon tuba.

Stephen E Arnold, September 7, 2017

A New and Improved Content Delivery System

September 7, 2017

Personalized content and delivery is the name of the game in PRWEB’s “Flatirons Solutions Launches XML DITA Dynamic Content Delivery Solutions.” Flatirons Solutions, a leading XML-based publishing and content management company, recently released its Dynamic Content Delivery Solution. The solution uses XML-based technology to deliver more personalized content to enterprises, and it is advertised as reducing publishing and support costs. The new solution is built on the MarkLogic Server.

By partnering with Mark Logic and incorporating their industry-leading XML content server, the solution conducts powerful queries, indexing, and personalization against large collections of DITA topics. For our clients, this provides immediate access to relevant information, while producing cost savings in technical support, and in content production, maintenance, review and publishing. So whether they are producing sales, marketing, technical, training or help documentation, clients can step up to a new level of content delivery while simultaneously improving their bottom line.

The Dynamic Content Delivery Solution is designed for government agencies and enterprises that publish XML content to various platforms and formats. MarkLogic is touted as a powerful tool to pool content from different sources, repurpose it, and deliver it through different channels.
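
At its simplest, “personalization against large collections of DITA topics” can mean filtering on DITA’s standard audience profiling attribute. A minimal sketch using only the Python standard library; the topic contents and audience values are invented, and a production system (MarkLogic included) would rely on indexes rather than parsing each document on the fly:

```python
import xml.etree.ElementTree as ET

# Three toy DITA topics as strings; a real system pulls these from a
# repository. The audience attribute is DITA's standard profiling attribute.
TOPICS = {
    "install.dita": '<topic id="install" audience="administrator">'
                    '<title>Installing the server</title></topic>',
    "tour.dita":    '<topic id="tour">'  # no audience: applies to everyone
                    '<title>Product tour</title></topic>',
    "api.dita":     '<topic id="api" audience="programmer">'
                    '<title>API reference</title></topic>',
}

def topics_for_audience(topics, audience):
    """Yield (name, title) for topics whose audience attribute matches.
    Unprofiled topics (no audience attribute) match every audience."""
    for name, xml in topics.items():
        root = ET.fromstring(xml)
        values = root.get("audience", "").split()
        if not values or audience in values:
            yield name, root.findtext("title", default="(untitled)")

for name, title in topics_for_audience(TOPICS, "administrator"):
    print(name, title)  # install.dita and tour.dita, not api.dita
```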

MarkLogic finds success in its core use case: slicing and dicing for publishing.  It is back to the basics for them.

Whitney Grace, September 7, 2017

 

IBM Watson Performance: Just an IBM Issue?

September 6, 2017

I read “IBM Pitched its Watson Supercomputer As a Revolution in Cancer Care. It’s Nowhere Close.” Here in Harrod’s Creek, doubts about IBM Watson are ever present. It was with some surprise that we learned:

But three years after IBM began selling Watson to recommend the best cancer treatments to doctors around the world, a STAT investigation has found that the supercomputer isn’t living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer. Only a few dozen hospitals have adopted the system, which is a long way from IBM’s goal of establishing dominance in a multibillion-dollar market. And at foreign hospitals, physicians complained its advice is biased toward American patients and methods of care.

The write up beats on the lame horse named Big Blue. I would wager that the horse does not like being whipped one bit. The write up ignores a problem shared by many “smart” software systems. Yep, even those from the wizards at Amazon, Facebook, Google, and Microsoft. That means there are many more stories to investigate and recount.

But I want more of the “why.” I have some hypotheses; for example:

Smart systems have to figure out information. On the surface, it seems as if Big Data can provide as much input as necessary, but that is a bit of a problem too: information is not immediately usable in its varied forms. Figuring out what information to use, and then getting that information into a form which the smart software can process, is expensive. The processes involved are also time consuming. Smart software needs nannies, and nannies who know their stuff. If you have ever tried to hire a nanny who fits into a specific family’s inner workings, you know that finding the “right” nanny is a complicated job in itself.
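
A concrete, if toy, version of the nanny work: three sources describe the same fact three different ways, and a human must decide how to reconcile them before the smart software sees anything. The field names, formats, and synonym table here are hypothetical:

```python
# Three sources, three shapes for the same facts. None is usable until
# someone decides how to reconcile them.
raw_records = [
    {"patient_age": "61", "dx": "NSCLC"},  # vendor A: strings, abbreviations
    {"age_years": 61, "diagnosis": "non-small cell lung cancer"},  # vendor B
    {"AGE": None, "DIAGNOSIS": "nsclc"},   # vendor C: missing data
]

DX_SYNONYMS = {"nsclc": "non-small cell lung cancer"}  # curated by a human

def normalize(record):
    """Coerce one source record into the single shape a model expects."""
    age = record.get("patient_age") or record.get("age_years") or record.get("AGE")
    dx = (record.get("dx") or record.get("diagnosis")
          or record.get("DIAGNOSIS") or "").lower()
    return {
        "age": int(age) if age is not None else None,  # impute? drop? a human decides
        "diagnosis": DX_SYNONYMS.get(dx, dx),
    }

print([normalize(r) for r in raw_records])
```

Multiply this by thousands of fields and dozens of sources and the cost and time figures start to make sense.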

Let’s stop. I have not tackled the mechanism for getting smart software to “understand” what humans mean with their utterances. These human outputs, by the way, take the form of audio, video, and text. To get smart software to comprehend intent and then figure out what specific item of tagged information is needed to deal with that intent is a complex problem too.

IBM Watson, like other outfits trying to generate revenue by surfing a trend, has been tossed off its wave rider by a very large rogue swell: Riffing on a magic system is a lot easier than making that smart software do useful work in a real world environment.

Enterprise search vendors fell victim to this mismatch between verbiage and actually performing in dynamic conditions.

Wipe out. (I hear the Surfaris’ “Wipe Out” in my mind. If you don’t know the song, click here.)

IBM Watson seems to be the victim of its own overinflated assertions.

My wish is for investigative reports to focus on case analyses. These articles can then discuss the reasons for user dissatisfaction, cost overruns, contract abandonments, and terminations (staff overhauls).

I want to know what specific subsystems and technical methods failed or cost so much that the customers bailed out.

As the write up points out:

But like a medical student, Watson is just learning to perform in the real world.

Human utterances and smart software. A work in progress but not for the tireless marketers and sales professionals who want to close a deal, pay the bills, and buy the new Apple phone.

Stephen E Arnold, September 6, 2017

Google and Information: Another Aberration or Genuine Insight?

September 1, 2017

I read “Yes, Google Uses Its Power to Quash Ideas It Doesn’t Like—I Know Because It Happened to Me.” How many “damore” of these allegations, misunderstandings, and misinterpretations will flow into my monitoring systems? It appears that a person who once labored for Forbes, the capitalist tool, is combining memory, the methods of Malcolm Gladwell, and a surfboard ride on the anti-Google wave.

The write up recounts this recollection of conversations with marketing and PR people, allegedly real, live Googlers:

I asked the Google people if I understood correctly: If a publisher didn’t put a +1 button on the page, its search results would suffer? The answer was yes. After the meeting, I approached Google’s public relations team as a reporter, told them I’d been in the meeting, and asked if I understood correctly. The press office confirmed it, though they preferred to say the Plus button “influences the ranking.” They didn’t deny what their sales people told me: If you don’t feature the +1 button, your stories will be harder to find with Google. With that, I published a story headlined, “Stick Google Plus Buttons On Your Pages, Or Your Search Traffic Suffers,” that included bits of conversation from the meeting.

If accurate, the method of determining search results runs counter to the information I presented in the Google Legacy,* which I wrote in 2003. In that monograph, I tallied about 100 “signals” that Google used to provide data to its objective algorithm for determining the importance of hits in a results list.

As part of my research for that monograph, I read patent documents stuffed with interesting discussions of what was wrong with certain approaches to search and retrieval. (You can find some juicy factoids in the discussion of the background of an invention. The pre-2007 Google patent documents strike me as more informative than Google’s most recent patent documents, but that’s just my opinion.) I recall that Google went to great lengths to explain the objectivity of its methods. I pointed out that judgment was involved in Google’s ranking methods because humans selected which numerical recipes to use and what threshold settings to apply for certain procedures. In my lectures about the exploitable “holes” in the most common numerical recipes used by Google and others, I explained how the machine-based methods could be fiddled. But overall, Google made clear that automation for cost reduction and efficiency was more important than human editorial fiddling.
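
A sketch of where the judgment hides. The signal names, weights, and threshold below are invented, not Google’s; the point is that the weights and the threshold are human choices even though the scoring itself is mechanical:

```python
# "Objective" scoring with subjective knobs. Signal names and values
# are illustrative only; Google's actual signals are not public.
WEIGHTS = {"link_score": 0.6, "text_match": 0.3, "freshness": 0.1}  # human choice
SPAM_THRESHOLD = 0.8  # human choice: documents above this are demoted

def score(doc_signals, spam_estimate):
    """Combine per-document signals into one ranking score."""
    s = sum(WEIGHTS[name] * value for name, value in doc_signals.items())
    if spam_estimate > SPAM_THRESHOLD:  # the threshold, not math, decides
        s *= 0.1
    return s

print(score({"link_score": 0.9, "text_match": 0.5, "freshness": 0.2}, 0.1))  # 0.71
```

Shift a weight or a threshold and the “objective” result list changes, which is why the human hands on those knobs matter.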

If the statement extracted from the Gizmodo write up is accurate, Google seems to have machine-based methods, but these can be used by humans to add the lieutenant’s favorite foods to the unit’s backpacks.

The Gizmodo article reveals:

Google never challenged the accuracy of the reporting. Instead, a Google spokesperson told me that I needed to unpublish the story because the meeting had been confidential, and the information discussed there had been subject to a non-disclosure agreement between Google and Forbes. (I had signed no such agreement, hadn’t been told the meeting was confidential, and had identified myself as a journalist.) It escalated quickly from there. I was told by my higher-ups at Forbes that Google representatives called them saying that the article was problematic and had to come down. The implication was that it might have consequences for Forbes, a troubling possibility given how much traffic came through Google searches and Google News.

With this non algorithmic interaction, the Gizmodo story depicts Google as a frisky outfit indeed. The objective system can be punitive. Really?

When I step back from this bit of “real” reporting, enlivened with the immediacy of an anecdote which seems plausible, I am thinking about the disconnect between my analysis in the Google Legacy and the events in the Gizmodo story.

Several questions arise:

  1. If the story is accurate, how “correct” are other articles about Google? Perhaps Google influenced many stories so that the person doing research is working with a stacked deck?
  2. If I assume that my research was correct in the 2002 to 2003 period when I was actively compiling data for the Google Legacy, what has caused this “objective method” to morph into a tool suitable for intimidation? If the shift did happen, what management actions at Google allowed objective methods to relax their grip?
  3. Why, after 20 years, are “real” news organizations now running stories about Google’s power, its machinations, and the collateral damage from Google employees who are far removed from the messy cubicles and Foosball games among Google’s elite engineers? Hey, those smart people were the story. Now it is the behavior of sales and public relations types who are making news? What’s this say about “news”? What’s this say about Google?

My hunch is that a large, 20-year-old company is very different from the outfit that hired folks from AltaVista, refugees from Bell Labs, and assorted wizards whose life’s work was of interest to 50 people at an ACM special interest group.

Perhaps the problem is a result of Google’s adding people with degrees in art history and political science? There may even be one or two failed middle school teachers among Google’s non technical staff. Imagine. Liberal arts or education majors in Google satellite offices. I can conjure a staff meeting which involves presentations with low contrast slides, not the wonky drawings that Jeff Dean once favored in his lectures about Big Table.

Google’s staffing has shifted over the years from 99 percent engineers and scientists to a more “balanced” blend of smart people. (I don’t want to say “watered down,” however.) One possibility is that these “stories” about the Google’s alleged punitive actions may be less about the Google technical system and methods and more about what happens when hiring policies change and the firm’s technical past is lost in the haze of success.

Could it be that Google’s sales, marketing, and PR professionals, not the engineers and scientists, are the problem? The fix is easy: more math, more algorithms, more smart software. Does Google need staff who can easily be categorized as “overhead”? I want to think about this question.

Stephen E Arnold, September 1, 2017

* If you want a pre publication copy of the Google Legacy from 2003, just write benkent2020 at yahoo dot com. Something can be worked out. Yes, this monograph still sells, just slowly.

Decoding IBM Watson

August 14, 2017

IBM Watson is one of the leading programs in natural language processing. Apart from understanding human interactions, however, Watson can do much more.

TechRepublic, in an article titled “IBM Watson: The Smart Person’s Guide,” says:

IBM Watson’s cognitive and analytical capabilities enable it to respond to human speech, process vast stores of data, and return answers to questions that companies could never solve before.

Named after IBM’s founding father, Thomas Watson, the program is already part of several organizations. The multi-million-dollar setup fee, however, is a stumbling block for most companies that want to utilize Watson’s potential.

Though Watson operates in seven different verticals, it has also been customized for specialties like cyber security. After impacting IT and related industries, Watson is slowly making inroads into industries like legal, customer service, and human resources, which can comfortably be said to be on the verge of disruption.

Vishal Ingole, August 14, 2017

After Voice, Visual Search Is the Next Frontier for Search

August 9, 2017

From text to voice, the search business has come a long way. If Pinterest’s co-founder is to be believed, the future of search is visual.

In an interview with the BBC, published as a video titled “Pinterest Co-Founder Says Photos Hold the Future of Search,” co-founder Evan Sharp says:

There are billions of ideas on Pinterest, and users search an equal number of them on Pinterest. Our primary source of revenue is advertising, wherein we help businesses promote their products and services through Pins.

There might be some substance to what Sharp is saying. Google recently revealed Google Lens and Google Deep Dream. While Google Lens helps users identify and search for objects around them, Deep Dream is a creative tool for generating composite images from various sources. The intent is to encourage users to adopt the visual tools that the company is building.

VR and AR are the buzzwords now, and soon marketers will be placing virtual ads within these visual mediums to promote their products. Google Goggles failed to take off, but probably only because the product was ahead of its time. How about a second take now?

Vishal Ingole, August 9, 2017

Ask Me Anything by Google

August 7, 2017

In a recently released report, Google, the search engine giant, says that of the billions of queries its users run, around 15% are unique or new.

Quartz, in an article titled “However Strange Your Search, Chances Are Google Has Seen It Before,” says:

His research shows that people turn to Google to learn about things prohibited by social norms: racist memes, self-induced abortions, and sexual fetishes of all kinds. In India, for example, the most popular query beginning “my husband wants…” is “…me to breastfeed him.”

Google has become synonymous with the search for any kind of information, service, or product all over the planet. Websites that cater to audience demand for information thus have an opportunity to capitalize on that demand and monetize their traffic.

A recent report suggests that SEO, the core of digital marketing, is a $90 billion industry that will surpass $150 billion in revenue by 2020. It is thus an excellent opportunity for anyone with a niche audience to monetize the idea.
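
A quick sanity check on those figures: growing $90 billion to $150 billion by 2020 implies a compound annual growth rate of roughly 18.6 percent over three years.

```python
# Implied growth rate for the SEO market figures cited above:
# $90 billion (2017) to $150 billion (2020).
start, end, years = 90e9, 150e9, 3
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # implied CAGR: 18.6%
```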

Vishal Ingole, August 7, 2017
