Traveling Content: What? No Border Control?

November 25, 2017

I read “Understanding the Content Journey.” Frankly I was left with a cold fish on my keyboard. I shoved the dead thing aside after I learned:

The next major disruption for marketers will be in the form of embedded machine learning capabilities that augment and automate the content journey — making content more intelligent.

Okay, marketers, how are you going to make content smarter, more intelligent? Indexing, manual tags, plugging into the IBM Watson smart thing, or following the precepts of search engine optimization?

Intelligent content comes from intelligent people. Machines can and do write about sports scores, financial reports, and other information which lends itself to relatively error free parsing.
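To make that concrete, here is a minimal sketch of the slot filling approach such machine writing systems often use; the data, templates, and thresholds are hypothetical, not any vendor's method:

```python
# Toy slot-filling generator of the sort used for sports recaps.
# The data and templates are hypothetical; real systems add grammar
# rules, synonym variation, and fact checks on the structured feed.

GAME = {"home": "Louisville", "away": "Lexington", "home_score": 27, "away_score": 21}

def recap(game: dict) -> str:
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home"], game["away"]
        w, l = game["home_score"], game["away_score"]
    else:
        winner, loser = game["away"], game["home"]
        w, l = game["away_score"], game["home_score"]
    margin = "edged" if w - l <= 3 else "defeated"
    return f"{winner} {margin} {loser} {w}-{l} on Saturday."

print(recap(GAME))  # Louisville defeated Lexington 27-21 on Saturday.
```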

None of these issues struck me as germane to the “content journey.” What I learned was that intelligent content has several facets; for instance:

  1. Content ideation and search. What is content ideation? Search is a buzzword which is less clear than words like “mother” and “semantics.” (At least for “mother,” everyone has one. For semantics, I am not sure marketers have the answer.)
  2. Content creation. I think this means writing. Most writing is just okay. Most college students once received average grades. Today, everyone gets a blue ribbon. Unfortunately writing remains difficult for many. I assume that content creation is different and, therefore, easier. One needs “content ideation” and Bing or Google.
  3. Content management. Frankly I have zero idea what content management means. The organizations with which I am familiar often have one or maybe multiple content management systems. In my experience, these are expensive beasties, and they, like enterprise search, generate considerable user hostility. The idea is to slap a slice and dice system on top of whatever marketers “write” and reuse that content for many purposes. Each purpose requires less and less of the “writing” function I believe.
  4. Content personalization. Ah, ha. Now I think I understand. A person needs an answer. A customer facing online support system will answer the person’s questions with no humans involved. That’s a close cousin to Facebook and Google keeping track of what a user does and then using that behavior to deliver “more like that.” Yes, that’s true “content ideation.” Reduce costs and reinforce what the user believes is accurate.
  5. Content delivery. That’s easy for me to understand. One uses social media or search engines to get the fruits of “content ideation” to a user. The only hitch is that free mechanisms are not reliable. The solution, from my perspective, is to buy ads. Facebook, Google, and other online ad mechanisms match the words from the “content ideation” with what the systems perceive is the user’s information need. Yep, that works well for research, fact checking, and analyzing a particular issue.
  6. Content performance. Now we come to metrics, which means either clicks or sales. At this point we are quite far from “content ideation” because the main point of this write up is that one only writes what produces clicks or sales. Tough luck, Nietzsche.

Net net: I am not sure if this write up would have received a passing grade from my first English 101 professor, a wacky crank named Dr. Pearce. For me, “content ideation” is more than making up a listicle of buzzwords.

But what about the journey? Well, that trope was abandoned because silliness rarely gets from Point A to Point B.

Pretty remarkable analysis even in our era of fake news, made up facts, specious analysis, and lax border controls.

Stephen E Arnold, November 25, 2017

Google Relevance: A Light Bulb Flickers

November 20, 2017

The Wall Street Journal published “Google Has Chosen an Answer for You. It’s Often Wrong” on November 17, 2017. The story is online, but you have to pay money to read it. I gave up on the WSJ’s online service years ago because at each renewal cycle, the WSJ kills my account. Pretty annoying because the pivot of the WSJ write up about Google implies that Google does not do information the way “real” news organizations do. Google, at least, does not annoy me the way “real” news outfits do with their online services.

For me, the WSJ is a collection of folks who find themselves looking at the exhaust pipes of the Google Hellcat. A source for a story like “Google Has Chosen an Answer for You. It’s Often Wrong” is a search engine optimization expert. Now that’s a source of relevance expertise! Other useful sources are the terse posts by Googlers authorized to write vapid, cheery comments in Google’s “official” blogs. The guts of Google’s technology are described in wonky technical papers, the background and claims sections of Google’s patent documents, and systematic queries run against Google’s multiple content indexes over time. A few random queries do not reveal the shape of the Googzilla in my experience. Toss in a lack of understanding about how Google’s algorithms work and their baked in biases, and you get a write up that slips on a banana peel of the imperative to generate advertising revenue.

I found the write up interesting for three reasons:

  1. Unusual topic. Real journalists rarely address the question of relevance in ad-supported online services from a solid knowledge base. But today everyone is an expert in search. Just ask any millennial, please. Jonathan Edwards had less conviction about his beliefs than a person skilled in the use of locating a pizza joint on a Google Map.
  2. SEO is an authority. SEO (search engine optimization) experts have done more to undermine relevance in online than any other group. The one exception is the teams who have to find ways to generate clicks from advertisers who want to shove money into the Google slot machine in the hopes of an online traffic pay day. Using SEO experts’ data as evidence grinds against my belief that old fashioned virtues like editorial policies, selectivity, comprehensive indexing, and a bear hug applied to precision and recall calculations are helpful when discussing relevance, accuracy, and provenance. (A refresher on those calculations appears after this list.)
  3. You don’t know what you don’t know. The presentation of the problems of converting a query into a correct answer reminds me of the many discussions I have had over the years with search engine developers. Natural language processing is tricky. Don’t believe me? Grab your copy of Gramatica didactica del espanol and check out the “rules” for el complemento circunstancial. Online systems struggle with what seems obvious to a reasonably informed human, but toss in multiple languages for automated question answering, and “Houston, we have a problem” echoes.
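For readers who have not calculated precision and recall since library school, here is the refresher promised above: a toy result set with hypothetical relevance judgments, nothing more.

```python
# Precision and recall for a single query, the old-fashioned way.
# "retrieved" is what the engine returned; "relevant" is the set a
# human judge marked as on-topic. Both sets are hypothetical.

retrieved = {"doc1", "doc2", "doc3", "doc4", "doc5"}
relevant  = {"doc2", "doc3", "doc7", "doc9"}

true_positives = retrieved & relevant              # {"doc2", "doc3"}
precision = len(true_positives) / len(retrieved)   # 2 / 5 = 0.4
recall    = len(true_positives) / len(relevant)    # 2 / 4 = 0.5

print(f"precision={precision:.2f} recall={recall:.2f}")
```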

I urge you to read the original WSJ article yourself. You decide how bad the situation is at ad-supported online search services, big time “real” news organizations, and among clueless users who believe that what’s online is, by golly, the truth dusted in accuracy and frosted with rightness.

Humans often take the path of least resistance; therefore, performing high school term paper research is a task left to an ad supported online search system. “Hey, the game is on, and I have to check my Facebook” takes precedence over analytic thought. But there is a free lunch, right?


In my opinion, this particular article fits in the category of dead tree media envy. I find it amusing that the WSJ is irritated that Google search results may not be relevant or accurate. There’s 20 years of search evolution under Googzilla’s scales, gentle reader. The good old days of the juiced up CLEVER methods and Backrub’s old fashioned ideas about relevance are long gone.

I spoke with one of the earlier Googlers in 1999 at a now defunct (thank goodness) search engine conference. As I recall, that confident and young Google wizard told me in a supercilious way that truncation was “something Google would never do.”

What? Huh?

Guess what? Google introduced truncation because it was a required method to deliver features like classification of content. Mr. Page’s comment to me in 1999 and the subsequent embrace of truncation makes clear that Google was willing to make changes to increase its ability to capture the clicks of users. Kicking truncation to the curb and then digging through the gutter trash told me two things: [a] Google could change its mind for the sake of expediency prior to its IPO and [b] Google could say one thing and happily do another.

I thought that Google would sail into accuracy and relevance storms almost 20 years ago. Today Googzilla may be facing its own Ice Age. Articles like the one in the WSJ are just belated harbingers of push back against a commercial company that now has to conform to “standards” for accuracy, comprehensiveness, and relevance.

Hey, Google sells ads. Algorithmic methods refined over the last two decades make that process slick and useful. Selling ads does not pivot on investing money in identifying valid sources and the provenance of “facts.” Not even the WSJ article probes too deeply into the SEO experts’ assertions and survey data.

I assume I should be pleased that the WSJ has finally realized that algorithms integrated with online advertising generate a number of problematic issues for those concerned with factual and verifiable responses.


Google Search and Hot News: Sensitivity and Relevance

November 10, 2017

I read “Google Is Surfacing Texas Shooter Misinformation in Search Results — Thanks Also to Twitter.” What struck me about the article was the headline; specifically, the implication for me was that Google was not responding to user queries. Google is actively “surfacing” or fetching and displaying information about the event. Twitter is also involved. I don’t think of Twitter as much more than a party line. One can look up keywords or see a stream of content containing a keyword or, to use Twitter speak, a “hash tag.”

The write up explains:

Users of Google’s search engine who conduct internet searches for queries such as “who is Devin Patrick Kelley?” — or just do a simple search for his name — can be exposed to tweets claiming the shooter was a Muslim convert; or a member of Antifa; or a Democrat supporter…

I think I understand. A user inputs a term and Google’s system matches the user’s query to the content in the Google index. Google maintains many indexes, despite its assertion that it is a “universal search engine.” One has to search across different Google services and their indexes to build up a mosaic of what Google has indexed about a topic; for example, blogs, news, the general index, maps, finance, etc.

Developing a composite view of what Google has indexed takes time and patience. The results may vary depending on whether the user is logged in, searching from a particular geographic location, or has enabled or disabled certain behind the scenes functions for the Google system.
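To illustrate the footwork involved, one can fire the same query at several Google verticals and compare what comes back. The sketch below only builds the query URLs; the “tbm” parameter values are commonly observed but undocumented assumptions, and Google can change them, and the results, at any time.

```python
from urllib.parse import urlencode

# Building the same query against several Google verticals to assemble
# a composite view. The "tbm" values are observed conventions, not a
# documented API; results also vary with login state and location.

VERTICALS = {"web": None, "news": "nws", "images": "isch", "videos": "vid", "books": "bks"}

def vertical_urls(query: str) -> dict:
    urls = {}
    for name, tbm in VERTICALS.items():
        params = {"q": query}
        if tbm:
            params["tbm"] = tbm
        urls[name] = "https://www.google.com/search?" + urlencode(params)
    return urls

for name, url in vertical_urls("devin patrick kelley").items():
    print(f"{name}: {url}")
```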

The write up contains this statement:

Safe to say, the algorithmic architecture that underpins so much of the content internet users are exposed to via tech giants’ mega platforms continues to enable lies to run far faster than truth online by favoring flaming nonsense (and/or flagrant calumny) over more robustly sourced information.

From my point of view, the ability to figure out what influences Google’s search results requires significant effort, numerous test queries, and recognition that Google search now balances on two pogo sticks. One “pogo stick” is blunt force keyword search. When content is indexed, terms are plucked from source documents. The system may or may not assign additional index terms to the document; for example, geographic or time stamps.

The other “pogo stick” is discovery and assignment of metadata. I have explained some of the optional tags which Google may or may not include when processing a content object; for example, see the work of Dr. Alon Halevy and Dr. Ramanathan Guha.
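A stripped down illustration of the two pogo sticks, terms plucked into an inverted index plus optionally assigned metadata, might look like the toy below. This is obviously not Google’s pipeline; the documents and tags are invented.

```python
from collections import defaultdict
from datetime import datetime, timezone

# Pogo stick one: blunt force keyword indexing.
# Pogo stick two: metadata the system may (or may not) assign.

docs = {
    "d1": {"text": "church shooting in texas", "geo": "US-TX"},
    "d2": {"text": "texas weather update", "geo": "US-TX"},
}

inverted = defaultdict(set)
metadata = {}

for doc_id, doc in docs.items():
    for term in doc["text"].split():          # terms plucked from the source
        inverted[term].add(doc_id)
    metadata[doc_id] = {                      # optionally assigned metadata
        "geo": doc.get("geo"),
        "indexed_at": datetime.now(timezone.utc).isoformat(),
    }

print(sorted(inverted["texas"]))   # ['d1', 'd2']
print(metadata["d1"]["geo"])       # US-TX
```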

But Google, like other smart content processing systems today, has a certain sensitivity. This means the system reacts when the streams of content it processes contain certain keywords.

When “news” takes place, the flood of content allows smart indexing systems to identify a “hot topic.” The test queries we ran for my monographs “The Google Legacy” and “Google Version 2.0” suggest that Google is sensitive to certain “triggers” in content. Feedback can be useful; it can also cause smart software to wobble a bit.


T shirts are easy; search is hard.

I believe that the challenge Google faces is similar to the problem Bing and Yandex are exploring as well; that is, certain numerical recipes can overreact to certain inputs. These overreactions may increase the difficulty of determining what content object is “correct,” “factual,” or “verifiable.”
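A toy example of the overreaction problem: a naive hot topic detector that promotes anything a few standard deviations above its recent baseline, rumor or not. This is a generic sketch with invented counts, not any engine’s actual recipe.

```python
from statistics import mean, stdev

# Hourly mention counts for a term; the last hour is a burst triggered
# by breaking news. A naive "hot topic" rule boosts anything a few
# standard deviations above its recent baseline -- including rumors.

history = [12, 9, 15, 11, 10, 13, 14, 12]   # hypothetical baseline hours
current = 240                                # the burst hour

baseline_mean = mean(history)                # 12
baseline_sd = stdev(history)                 # 2
z = (current - baseline_mean) / baseline_sd

if z > 3:
    print(f"hot topic: z-score {z:.1f}, boost everything that matches")
```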

Expecting a free search system, regardless of its owner, to know what’s true and what’s false is understandable. In my opinion, making this type of determination with today’s technology, system limitations, and content analysis methods is impossible.

In short, the burden of figuring out what’s right and what’s not correct falls on the user, not exclusively on the search engine. Users, on the other hand, may not want the “objective” reality. Search vendors want traffic and want to generate revenue. Algorithms want nothing.

Mix these three elements and one takes a step closer to understanding that search and retrieval is not the slam dunk some folks would have me believe. In fact, the sensitivity of content processing systems to comparatively small inputs requires more discussion. Perhaps that type of information will come out of discussions about how best to deal with fake news and related topics in the context of today’s information retrieval environment.

Free search? Think about that too.

Stephen E Arnold, November 10, 2017

Twitch Incorporates ClipMine Discovery Tools

September 18, 2017

Gameplay-streaming site Twitch has adapted the platform of their acquisition ClipMine, originally developed for adding annotations to online videos, into a metadata-generator for its users. (Twitch is owned by Amazon.) TechCrunch reports the development in, “Twitch Acquired Video Indexing Platform ClipMine to Power New Discovery Features.” Writer Sarah Perez tells us:

The startup’s technology is now being put to use to translate visual information in videos – like objects, text, logos and scenes – into metadata that can help people more easily find the streams they want to watch. Launched back in 2015, ClipMine had originally introduced a platform designed for crowdsourced tagging and annotations. The idea then was to offer a technology that could sit over top videos on the web – like those on YouTube, Vimeo or DailyMotion – that allowed users to add their own annotations. This, in turn, would help other viewers find the part of the video they wanted to watch, while also helping video publishers learn more about which sections were getting clicked on the most.

Based in Palo Alto, ClipMine went on to make indexing tools for the e-sports field and to incorporate computer vision and machine learning into their work. Their platform’s ability to identify content within videos caught Twitch’s eye; Perez explains:

Traditionally, online video content is indexed much like the web – using metadata like titles, tags, descriptions, and captions. But Twitch’s streams are live, and don’t have as much metadata to index. That’s where a technology like ClipMine can help. Streamers don’t have to do anything differently than usual to have their videos indexed, instead, ClipMine will analyze and categorize the content in real-time.
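The mechanics implied by that description look roughly like the sketch below; the frame classifier is a hypothetical stand-in for ClipMine’s computer vision models, and the labels are invented.

```python
from collections import defaultdict

# Indexing a live stream as it plays: each sampled frame is labeled and
# the labels become searchable metadata. classify_frame() is a
# hypothetical stand-in for a trained vision model.

def classify_frame(frame) -> list:
    # A real system would run object, text, and logo detection here.
    return frame.get("labels", [])

stream_index = defaultdict(set)   # label -> set of (stream, timestamp)

def ingest(stream_id: str, frames):
    for frame in frames:
        for label in classify_frame(frame):
            stream_index[label].add((stream_id, frame["t"]))

# Two hypothetical sampled frames from an Overwatch stream.
ingest("twitch:123", [
    {"t": 10.0, "labels": ["overwatch", "hero_select"]},
    {"t": 95.5, "labels": ["overwatch", "team_fight"]},
])

print(sorted(stream_index["team_fight"]))  # [('twitch:123', 95.5)]
```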

ClipMine’s technology has already been incorporated into stream-discovery tools for two games from Blizzard Entertainment, “Overwatch” and “Hearthstone;” see the article for more specifics on how and why. Through its blog, Twitch indicates that more innovations are on the way.

Cynthia Murrell, September 18, 2017

Yet Another Digital Divide

September 8, 2017

Recommind sums up what happened at a recent technology convention in the article, “Why Discovery & ECM Haven’t, Must Come Together (CIGO Summit 2017 Recap).” Author Hal Marcus opens by recalling that he has long been a staunch challenger of anyone who claimed to provide a complete information governance solution. He recently spoke at CIGO Summit 2017 about how to make information governance a feasible goal for organizations.

The problem with information governance is that there is no one simple solution, and projects tend to be self-contained with only one goal: data collection, data reduction, etc. When he spoke, he explained that there are five main reasons there is not one comprehensive solution: projects take a long time to scope and complete, data can come from multiple streams, mass-scale indexing is challenging, analytics help only when humans interpret the data, and risk and cost put a damper on projects.

Yet we are closer to a solution:

Corporations seem to be dedicating more resources for data reduction and remediation projects, triggered largely by high profile data security breaches.

Multinationals are increasingly scrutinizing their data sharing and retention practices, spurred by the impending May 2018 GDPR deadline.

ECA for data culling is becoming more flexible and mature, supported by the growing availability and scalability of computing resources.

Discovery analytics are being offered at lower, all-you-can-eat rates, facilitating a range of corporate use cases like investigations, due diligence, and contract analysis

Tighter, more seamless and secure integration of ECM and discovery technology is advancing and seeing adoption in corporations, to great effect.

And it always seems farther away.

Whitney Grace, September 8, 2017

A New and Improved Content Delivery System

September 7, 2017

Personalized content and delivery is the name of the game in PRWEB’s “Flatirons Solutions Launches XML DITA Dynamic Content Delivery Solutions.” Flatirons Solutions is a leading XML-based publishing and content management company, and it recently released its Dynamic Content Delivery Solution. The Dynamic Content Delivery Solution uses XML-based technology to allow enterprises to deliver more personalized content. It is advertised that it will reduce publishing and support costs. The new solution is built with the Mark Logic Server.

By partnering with Mark Logic and incorporating their industry-leading XML content server, the solution conducts powerful queries, indexing, and personalization against large collections of DITA topics. For our clients, this provides immediate access to relevant information, while producing cost savings in technical support, and in content production, maintenance, review and publishing. So whether they are producing sales, marketing, technical, training or help documentation, clients can step up to a new level of content delivery while simultaneously improving their bottom line.

The Dynamic Content Delivery Solution is designed for government agencies and enterprises that publish XML content to various platforms and formats.  Mark Logic is touted as a powerful tool to pool content from different sources, repurpose it, and deliver it to different channels.
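At the XML level, “dynamic” DITA delivery amounts to querying topic metadata and assembling only the matching topics. Here is a toy stand-in that uses Python’s standard library rather than a Mark Logic collection; the topics and audience values are hypothetical.

```python
import xml.etree.ElementTree as ET

# Three hypothetical DITA topics; the "audience" attribute drives
# personalization. A real deployment would query a content server
# collection instead of an in-memory list of strings.

topics = [
    '<topic id="install" audience="technician"><title>Install the unit</title></topic>',
    '<topic id="safety" audience="all"><title>Safety notices</title></topic>',
    '<topic id="pricing" audience="sales"><title>Pricing overview</title></topic>',
]

def assemble(audience: str) -> list:
    selected = []
    for raw in topics:
        topic = ET.fromstring(raw)
        if topic.get("audience") in (audience, "all"):
            selected.append(topic.findtext("title"))
    return selected

print(assemble("sales"))  # ['Safety notices', 'Pricing overview']
```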

MarkLogic finds success in its core use case: slicing and dicing for publishing.  It is back to the basics for them.

Whitney Grace, September 7, 2017


Former Google Employee Launches a New Kind of Search

September 1, 2017

We learn about a new approach to internet search from Business Insider’s piece, “Once Google’s Youngest Employee, this Woman Just Unveiled a New Search Company that Might Make Google Worried.” The new platform aims to cut through the traditional results list, which, depending on the search term(s), can take a lot of time to comb through. It also hopes to connect users to information that they didn’t know to search for. Reporter Caroline Cakebread writes:

Led by founder and CEO Falon Fatemi, Node emerged from stealth on Tuesday ready to take on its lofty goal of changing the way we discover information. By using AI to connect you or your business with the right opportunity at the right time, Node wants to ‘accelerate serendipity’ on the web. Node’s patent-pending technology works by indexing people, places, products, and companies instead of web pages, and using this data to connect customers to opportunities. So far, it has half a billion profiles. The AI understands the relationships between people and companies, and can marry its data layer with a customer’s personal data. Node is currently integrated with Salesforce, and customers can ask questions like ‘What company will be most interested in my product?’ Node will tell the customer who or what they need to connect with, why it came up with that answer, and even what to say to make the most of the opportunity. It’s searching without using a search box.
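A back of the napkin sketch of what “indexing people, places, products, and companies instead of web pages” can look like appears below; the data and the traversal are toys, not Node’s patent-pending technology or its scoring.

```python
# A tiny entity graph: nodes are people, companies, and topics; edges
# are typed relationships. The traversal is deliberately naive; Node's
# actual data layer and ranking are proprietary.

edges = [
    ("Acme Corp", "employs", "Jane Doe"),
    ("Globex", "employs", "John Roe"),
    ("Jane Doe", "interested_in", "analytics"),
    ("John Roe", "interested_in", "logistics"),
]

def companies_to_pitch(topic: str) -> list:
    interested_people = {s for s, r, o in edges
                         if r == "interested_in" and o == topic}
    return [s for s, r, o in edges
            if r == "employs" and o in interested_people]

# "Which company will be most interested in my analytics product?"
print(companies_to_pitch("analytics"))  # ['Acme Corp']
```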

Node began as Fatemi’s personal project, and now her firm has raised $16.3 million in funding so far. She envisions her new tech as the “intelligence layer of the internet,” as Cakebread puts it, and believes any realm of life, from sales strategy to dating options, could benefit from this approach.

Fatemi started at Google while still in college. She wrote an article for Fast Company a couple years ago, “I Joined Google at 19. Here’s What I Learned,” in which she credits her time at Google with instilling many of the qualities that have made her a successful entrepreneur. See that article for those lessons learned.

Cynthia Murrell, September 01, 2017

IBM Watson Deep Learning: A Great Leap Forward

August 16, 2017

I read the following article in the IBM marketing publication Fortune Magazine. Oh, sorry, I meant the independent real business news outfit Fortune: “IBM Claims Big Breakthrough in Deep Learning.” (I know the write up is objective because the headline includes the word “claims.”)

The main point, that the IBM Watson super game winning thing can now do certain computational tasks more quickly, is mildly interesting. I noticed that one of our local tire discounters has a sale on a brand called Primewell. That struck me as more interesting than this IBM claim.

First, what’s the great leap forward the article touts? I highlighted this passage:

IBM says it has come up with software that can divvy those tasks among 64 servers running up to 256 processors total, and still reap huge benefits in speed. The company is making that technology available to customers using IBM Power System servers and to other techies who want to test it.

How many IBM Power 8 servers does it take to speed up Watson’s indexing? I learned:

IBM used 64 of its own Power 8 servers—each of which links both general-purpose Intel microprocessors with Nvidia graphical processors with a fast NVLink interconnection to facilitate fast data flow between the two types of chips
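For context, distributing training across many GPUs generally follows a data parallel pattern: each process holds a replica of the model and gradients are averaged across processes every step. The sketch below uses present-day PyTorch as a generic stand-in; it is not IBM’s distributed deep learning library, and the model and data are dummies.

```python
# A generic data-parallel training sketch (PyTorch), not IBM's DDL:
# one process per GPU, gradients averaged across processes each step.
# Launch with: torchrun --nproc_per_node=<gpus> this_script.py

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")               # one process per GPU
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(512, 10).cuda(rank)
    model = DDP(model, device_ids=[rank])         # wraps gradient averaging
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(100):                          # dummy training loop
        x = torch.randn(64, 512, device=rank)
        y = torch.randint(0, 10, (64,), device=rank)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()           # all-reduce happens here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```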

A couple of questions:

  1. How much does it cost to outfit 64 IBM Power 8 servers to perform this magic?
  2. How many Nvidia GPUs are needed?
  3. How many Intel CPUs are needed?
  4. How much RAM is required in each server?
  5. How much time does it require to configure, tune, and deploy the set up referenced in the article?

My hunch is that this set up is slightly more costly than buying a Chromebook or signing on for some Amazon cloud computing cycles. These questions, not surprisingly, are not of interest to the “real” business magazine Fortune. That’s okay. I understand that one can get only so much information from a news release, a PowerPoint deck, or a lunch. No problem.

The other thought that crossed my mind as I read the story, “Does Fortune think that IBM is the only outfit using GPUs to speed up certain types of content processing?” Ah, well, IBM is probably so sophisticated that it is working on engineering problems that other companies cannot conceive let alone tackle.

Now the second point: Content processing to generate a Watson index is a bottleneck. However, the processing is what I call a downstream bottleneck. The really big hurdle for IBM Watson is the manual work required to set up the rules which the Watson system has to follow. Compared to the data crunching, training and rule making are the giant black holes of time and complexity. Fancy Dan servers don’t get to strut their stuff until the days, weeks, months, and years of setting up the rules is completed, tuned, and updated.
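To give a flavor of that manual work, here is a toy hand crafted rule base. Every line of it has to come out of meetings with subject matter experts, and none of it reflects Watson’s actual rule formalism; the patterns and routes are invented.

```python
import re

# A toy hand-crafted rule base. Every pattern below would come out of a
# meeting with subject matter experts; none of it is learned from data,
# and none of it is Watson's actual rule formalism.

RULES = [
    (re.compile(r"\bwarranty\b", re.I), "route: warranty policy document"),
    (re.compile(r"\b(reset|restart)\b.*\bpassword\b", re.I), "route: password reset runbook"),
    (re.compile(r"\b(invoice|billing)\b", re.I), "route: billing team"),
]

def answer(question: str) -> str:
    for pattern, action in RULES:
        if pattern.search(question):
            return action
    return "route: human agent"   # the fallback that gets exercised a lot

print(answer("How do I reset my password?"))  # route: password reset runbook
```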

Fortune Magazine obviously considers this bottleneck of zero interest. My hunch is that IBM did not explain this characteristic of IBM Watson or the Achilles’ heel of figuring out the rules. Who wants to sit in a room with subject matter experts and three or four IBM engineers talking about what’s important, what questions are asked, and what data are required.

AskJeeves demonstrated decades ago that human crafted rules are Black Diamond ski runs. IBM Watson’s approach is interesting. But what’s fascinating is the uncritical acceptance of IBM’s assertions and the lack of interest in tackling substantive questions. Maybe lunch was cut short?

Stephen E Arnold, August 16, 2017

Smartlogic: A Buzzword Blizzard

August 2, 2017

I read “Semantic Enhancement Server.” Interesting stuff. The technology struck me as a cross between indexing, good old enterprise search, and assorted technologies. Individuals who are shopping for an automatic indexing system (either one with expensive, time consuming hand coded rules or a more Autonomy-like automatic approach) will want to kick the tires of the Smartlogic system. In addition to the echoes of the SchemaLogic approach, I noted a Thomson submachine gun firing buzzwords; for example:

best bets (I’m feeling lucky?)
dynamic summaries (like Island Software’s approach in the 1990s)
faceted search (hello, Endeca? see the sketch after this list)
model
navigator (like the Siderean “navigator”?)
real time
related topics (clustering like Vivisimo’s)
semantic (of course)
taxonomy
topic maps
topic pages (a Google report as described in US29970198481)
topic path browser (aka breadcrumbs?)
visualization
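For readers who have lost track of what some of these buzzwords denote, faceted search is the easiest to pin down. A bare bones sketch over a toy document set follows; nothing here is Smartlogic specific, and the documents are invented.

```python
from collections import Counter

# Faceted search in miniature: filter a result set, then count the
# remaining values of each facet field. Toy documents, no ranking.

docs = [
    {"title": "Enterprise search survey", "year": 2017, "type": "report"},
    {"title": "Taxonomy basics",          "year": 2016, "type": "article"},
    {"title": "Semantic tagging primer",  "year": 2017, "type": "article"},
]

def facets(results, fields=("year", "type")):
    return {field: Counter(doc[field] for doc in results) for field in fields}

hits = [d for d in docs if d["year"] == 2017]
print(facets(hits))
# {'year': Counter({2017: 2}), 'type': Counter({'report': 1, 'article': 1})}
```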

What struck me after I compiled this list about a system that “drives exceptional user search experiences” was that Smartlogic is repeating the marketing approach of traditional vendors of enterprise search. The marketing lingo and “one size fits all” triggered thoughts of Convera, Delphes, Entopia, Fast Search & Transfer, and Siderean Software, among others.

I asked myself:

Is it possible for one company’s software to perform such a remarkable array of functions in a way that is easy to implement, affordable, and scalable? There are industrial strength systems which perform many of these functions. Examples range from BAE’s intelligence system to the Palantir Gotham platform.

My hypothesis is that Smartlogic might struggle to process a real time flow of WhatsApp messages, YouTube content, and mobile phone intercept voice calls. Toss in the multi language content which is becoming increasingly important to enterprises, and the notional balloon I am floating says, “Generating buzzwords and associated over inflated expectations is really easy. Delivering high accuracy, affordable, and scalable content processing is a bit more difficult.”

Perhaps Smartlogic has cracked the content processing equivalent of the Voynich manuscript.


Will buzzwords crack the Voynich manuscript’s inscrutable text? What if Voynich is a fake? How will modern content processing systems deal with this type of content? Running some content processing tests might provide some insight into systems which possess Watson-esque capabilities.

What happened to those vendors like Convera, Delphes, Entopia, Fast Search & Transfer, and Siderean Software, among others? (Free profiles of these companies are available at www.xenky.com/vendor-profiles.) Oh, that’s right. The reality of the marketplace did not match the companies’ assertions about technology. Investors and licensees of some of these systems were able to survive the buzzword blizzard. Some became the digital equivalent of Ötzi, the 5,300 year old iceman.

Stephen E Arnold, August 2, 2017

ArnoldIT Publishes Technical Analysis of the Bitext Deep Linguistic Analysis Platform

July 19, 2017

ArnoldIT has published “Bitext: Breakthrough Technology for Multi-Language Content Analysis.” The analysis provides the first comprehensive review of the Madrid-based company’s Deep Linguistic Analysis Platform or DLAP. Unlike most next-generation multi-language text processing methods, Bitext has crafted a platform. The document can be downloaded from the Bitext Web site via this link.

Based on information gathered by the study team, the Bitext DLAP system outputs metadata with an accuracy in the 90 percent to 95 percent range. Most content processing systems today typically deliver metadata and rich indexing with accuracy in the 70 to 85 percent range.

According to Stephen E Arnold, publisher of Beyond Search and Managing Director of Arnold Information Technology:

“Bitext’s output accuracy establishes a new benchmark for companies offering multi-language content processing systems.”

The system performs more than 15 discrete analytic processes in near real time. The system can output enhanced metadata for more than 50 languages. The structured stream provides machine learning systems with a low cost, highly accurate way to learn. Bitext’s DLAP platform integrates more than 30 separate syntactic functions. These include segmentation and tokenization (word segmentation, frequency, and disambiguation), among others. The DLAP platform analyzes more than 15 linguistic features of content in any of the more than 50 supported languages. The system extracts entities and generates high-value data about documents, emails, social media posts, Web pages, and structured and semi-structured data.
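The report does not include Bitext code, but the shape of the output, per-document structured metadata, can be suggested with a crude stand-in. The regex “entities” below are a placeholder, not DLAP’s linguistic analysis, and the field names are hypothetical.

```python
import json
import re
from collections import Counter

# A crude stand-in for a linguistic analysis pipeline: tokenize, count
# term frequency, and pull capitalized spans as "entities." Bitext's
# DLAP performs real morphological and syntactic analysis; this toy
# does not, and the output field names are hypothetical.

def analyze(doc_id: str, text: str) -> dict:
    tokens = re.findall(r"\w+", text.lower())
    counts = Counter(tokens)
    entities = re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*", text)
    return {
        "doc_id": doc_id,
        "token_count": len(tokens),
        "top_terms": [term for term, _ in counts.most_common(3)],
        "entities": entities,
    }

record = analyze("email-001", "Palantir Gotham ingests the metadata feed that Bitext produces.")
print(json.dumps(record, indent=2))
```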

DLAP Applications range from fraud detection to identifying nuances in streams of data; for example, the sentiment or emotion expressed in a document. Bitext’s system can output metadata and other information about processed content as a feed stream to specialized systems such as Palantir Technologies’ Gotham or IBM’s Analyst’s Notebook. Machine learning systems such as those operated by such companies as Amazon, Apple, Google, and Microsoft can “snap in” the Bitext DLAP platform.

Copies of the report are available directly from Bitext at https://info.bitext.com/multi-language-content-analysis. Information about Bitext is available at www.bitext.com.

Kenny Toth, July 19, 2017
