Understanding Intention: Fluffy and Frothy with a Few Factoids Folded In

October 16, 2017

Introduction

One of my colleagues forwarded me a document called “Understanding Intention: Using Content, Context, and the Crowd to Build Better Search Applications.” To get a copy of the collateral, one has to register at this link. My colleague wanted to know what I thought about this “book” by Lucidworks. That’s what Lucidworks calls the 25 page marketing brochure. I read the PDF file and was surprised at what I perceived as fluff, not facts or a cohesive argument.


The topic was of interest to my colleague because we completed a five month review and analysis of “intent” technology. In addition to two white papers about using smart software to figure out and tag (index) content, we had to immerse ourselves in computational linguistics, multi-language content processing technology, and semantic methods for “making sense” of text.

The Lucidworks’ document purported to explain intent in terms of content, context, and the crowd. The company explains:

With the challenges of scaling and storage ticked off the to-do list, what’s next for search in the enterprise? This ebook looks at the holy trinity of content, context, and crowd and how these three ingredients can drive a personalized, highly-relevant search experience for every user.

The presentation of “intent” was quite different from what I expected. The details of figuring out what content “means” were sparse. The focus was not on methodology but on selling integration services. I found this interesting because I have Lucidworks in my list of open source search vendors. These are companies which repackage open source technology, create some proprietary software, and assist organizations with engineering and integration services.

The book was an explanation anchored in buzzwords, not the type of detail we expected. After reading the text, I was not sure how Lucidworks would go about figuring out what an utterance might mean. The intent-centric systems we reviewed over the course of five months followed several different paths.

Some companies relied upon statistical procedures. Others used dictionaries and pattern matching. A few combined multiple approaches in a content pipeline. Our client, a firm based in Madrid, focused on computational linguistics plus a series of procedures which combined proprietary methods with “modules” to perform specific functions. The idea for this approach was to reduce errors in intent identification, lifting accuracy from between 65 and 80 percent to approaching and often exceeding 90 percent. For text processing in multi-language corpuses, the Spanish company’s approach was a breakthrough.
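
For the curious, here is a minimal sketch of that kind of staged pipeline in Python. The patterns, labels, confidence threshold, and the placeholder classifier are my own inventions for illustration; they are not the Madrid firm’s proprietary method or any vendor’s actual code:

```python
import re

# Hypothetical rule book: regex patterns mapped to intent labels. Real systems
# use curated, multi-language dictionaries, not three toy rules.
RULES = [
    (re.compile(r"\b(refund|money back|return my)\b", re.I), "request_refund"),
    (re.compile(r"\b(cancel|close) my (account|subscription)\b", re.I), "cancel_account"),
    (re.compile(r"\b(how do i|how to)\b", re.I), "how_to_question"),
]

def statistical_intent(text):
    """Stand-in for a trained statistical classifier (for example, logistic
    regression over character n-grams). Returns (label, confidence)."""
    return "other", 0.55

def identify_intent(text, min_confidence=0.80):
    # Stage 1: deterministic dictionary and pattern matching.
    for pattern, label in RULES:
        if pattern.search(text):
            return label, 1.0, "rule"
    # Stage 2: statistical fallback; low-confidence calls get flagged for review.
    label, confidence = statistical_intent(text)
    status = "model" if confidence >= min_confidence else "needs_review"
    return label, confidence, status

print(identify_intent("How do I get my money back?"))   # a rule fires
print(identify_intent("the thing arrived broken"))      # falls through to the model
```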

I was disappointed but not surprised that Lucidworks’ approach was breezy. One of my colleagues used the word “frothy” to describe the information in the “Understanding Intention” document.

As I read the document, which struck me as a shotgun marriage of generalizations and examples of use cases in which “intent” was important, I made some notes.

Let me highlight five of the observations I made. I urge you to read the original Lucidworks’ document so you can judge the Lucidworks’ arguments for yourself.

Imitation without Attribution

My first reaction was that Lucidworks had borrowed conceptually from ideas articulated by Dr. Gregory Grefenstette and his book Search Based Applications: At the Confluence of Search and Database Technologies. You can purchase this 2011 book on Amazon at this link. Lucidworks’ approach borrowed some of Dr. Grefenstette’s analysis but, unlike his book, did not include the detail which supports the increasing importance of using search as a utility within larger information access solutions. Without detail, the Lucidworks’ document struck me as a description of the type of solutions that a company like Tibco is now offering its customers.


Voice Search: An Amazon and Google Dust Up

January 26, 2017

I read “Amazon and Google Fight Crucial Battle over Voice Recognition.” I like the idea that Amazon and Google are macho brawlers. I think of folks who code as warriors. Hey, because some programmers wear their Comicon costumes to work in Mountain View and Seattle, some may believe that code masters are wimps. Obviously they are not. The voice focused programmers are tough, tough dudes and dudettes.

I learned from a “real” British newspaper that two Viking-inspired warrior cults are locked in a battle. The fate of the voice search world hangs in the balance. Why isn’t this dust up covered in more depth on Entertainment Tonight or the talking head “real” news television programs?

I learned:

The retail giant has a threatening lead over its rival with the Echo and Alexa, as questions remain over how the search engine can turn voice technology into revenue.

What? If there is a battle, it seems that Amazon has a “threatening lead.” How will Google respond? Online advertising? New products like the Pixel which, in some areas, is not available due to production and logistics issues?

No. Here’s the scoop from the Fleet Street experts:

The risk to Google is that at the moment, almost everyone starting a general search at home begins at Google’s home page on a PC or phone. That leads to a results page topped by text adverts – which help generate about 90% of Google’s revenue, and probably more of its profits. But if people begin searching or ordering goods via an Echo, bypassing Google, that ad revenue will fall. And Google has cause to be uncomfortable. The shift from desktop to mobile saw the average number of searches per person fall as people moved to dedicated apps; Google responded by adding more ads to both desktop and search pages, juicing revenues. A shift that cut out the desktop in favor of voice-oriented search, or no search at all, would imperil its lucrative revenue stream.

Do I detect a bit of glee in this passage? Google is responding in what is presented as a somewhat predictable way:

Google’s natural reaction is to have its own voice-driven home system, in Home. But that poses a difficulty, illustrated by the problems it claims to solve. At the device’s launch, one presenter from the company explained how it could speak the answer to questions such as “how do you get wine stains out of a rug?” Most people would pose that question on a PC or mobile, and the results page would offer a series of paid-for ads. On Home, you just get the answer – without ads.

Hasn’t Google read “The Art of War” which advises:

“Let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt.”

My hunch is that this “real” news write up is designed to poke the soft underbelly of Googzilla. That sounds like a great idea. Try this with your Alexa, “Alexa, how do I hassle Google?”

Stephen E Arnold, January 26, 2017

Some Things Change, Others Do Not: Google and Content

January 20, 2017

Reading Search Engine Journal’s “The Evolution Of Semantic Search And Why Content Is Still King” brings to mind how RankBrain is changing the way Google determines search relevancy. The article was written in 2014, but it stresses the importance of semantic search and SEO. With RankBrain, semantic search is more of a daily occurrence than something to strive for.

RankBrain also demonstrates how far search technology has come in three years.  When people search, they no longer want to fish out the keywords from their query; instead they enter an entire question and expect the search engine to understand.

This brings up the question: is content still king?  Back in 2014, the answer was yes and the answer is a giant YES now.  With RankBrain learning the context behind queries, well-written content is what will drive search engine ranking:

What it boils down to is search engines and their complex algorithms are trying to recognize quality over fluff. Sure, search engine optimization will make you more visible, but content is what will keep people coming back for more. You can safely say content will become a company asset because a company’s primary goal is to give value to their audience.

The article ends with something about natural language and how people want their content to reflect it.  The article does not provide anything new, but does restate the value of content over fluff.  What will happen when computers learn how to create semantic content, however?

Whitney Grace, January 20, 2017

Indexing: A Cautionary Example

November 17, 2015

I read “Half of World’s Museum Specimens Are Wrongly Labeled, Oxford University Finds.” Anyone involved in indexing knows the perils of assigning labels, tags, or what the whiz kids call metadata to an object.

Humans make mistakes. According to the write up:

As many as half of all natural history specimens held in some of the world’s greatest institutions are probably wrongly labeled, according to experts at Oxford University and the Royal Botanic Garden in Edinburgh. The confusion has arisen because even accomplished naturalists struggle to tell the difference between similar plants and insects. And with hundreds or thousands of specimens arriving at once, it can be too time-consuming to meticulously research each and guesses have to be made.

Yikes. Only half. I know that human indexers get tired. Now there is just too much work to do. The reaction is typical of busy subject matter experts. Just guess. Close enough for horse shoes.

What about machine indexing? Anyone who has retrained an HP Autonomy system knows that humans get involved as well. If humans make mistakes with bugs and weeds, imagine what happens when a human has to figure out a blog post in a dialect of Korean.

The brutal reality is that indexing is a problem. When dealing with humans, the problems do not go away. When humans interact with automated systems, the automated systems make mistakes, often more rapidly than the sorry human indexing professionals do.

What’s the point?

I would sum up the implication as:

Do not believe a human (indexing species or marketer of automated indexing species).

Acceptable indexing with accuracy above 85 percent is very difficult to achieve. Unfortunately, the graduates of a taxonomy boot camp or the entrepreneur flogging an automatic indexing system powered by artificial intelligence may not be reliable sources of information.
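
A skeptical reader can test such claims with a small amount of arithmetic: compare the tags a system (or a tired human) assigned against a sample reviewed by a subject matter expert. A minimal sketch in Python, with invented documents and tags purely for illustration:

```python
# Hypothetical gold set: document id -> tags verified by a subject matter expert.
gold = {
    "doc1": {"botany", "taxonomy"},
    "doc2": {"entomology"},
    "doc3": {"botany"},
}

# Tags the indexing system under test actually assigned.
assigned = {
    "doc1": {"botany", "zoology"},
    "doc2": {"entomology"},
    "doc3": {"geology"},
}

def tag_precision_recall(gold, assigned):
    """Precision: share of assigned tags that are correct.
    Recall: share of correct tags the system actually assigned."""
    true_pos = sum(len(gold[d] & assigned.get(d, set())) for d in gold)
    assigned_total = sum(len(tags) for tags in assigned.values())
    gold_total = sum(len(tags) for tags in gold.values())
    return (true_pos / assigned_total if assigned_total else 0.0,
            true_pos / gold_total if gold_total else 0.0)

p, r = tag_precision_recall(gold, assigned)
print(f"precision={p:.0%} recall={r:.0%}")  # 50% / 50% on this toy sample
```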

I know that this notion of high error rates is disappointing to those who believe their whizzy new system works like a champ.

Reality is often painful, particularly when indexing is involved.

What are the consequences? Here are three:

  1. Results of queries are incomplete or just wrong
  2. Users are unaware of missing information
  3. Failure to maintain either human, human assisted, or automated systems results in indexing drift. Eventually the indexing is just misleading if not incorrect.

How accurate is your firm’s indexing? How accurate is your own indexing?

Stephen E Arnold, November 17, 2015

Exclusive Interview: Danny Rogers, Terbium Labs

August 11, 2015

Editor’s note: The full text of the exclusive interview with Dr. Daniel J. Rogers, co-founder of Terbium Labs, is available on the Xenky Cyberwizards Speak Web service at www.xenky.com/terbium-labs. The interview was conducted on August 4, 2015.

Significant innovations in information access, despite the hyperbole of marketing and sales professionals, are relatively infrequent. In an exclusive interview, Danny Rogers, one of the founders of Terbium Labs, explained how his company flips on the lights to make it easy to locate information hidden on the Dark Web.

Web search has been a one-trick pony since the days of Excite, HotBot, and Lycos. For most people, a mobile device takes cues from the user’s location and click streams and displays answers. Access to digital information requires more than parlor tricks and pay-to-play advertising. A handful of companies are moving beyond commoditized search, and they are opening important new markets such as the detection of secret and high value data theft. Terbium Labs can “illuminate the Dark Web.”

In an exclusive interview, Dr. Danny Rogers, one of the founders of Terbium Labs with Michael Moore, explained the company’s ability to change how data breaches are located. He said:

Typically, breaches are discovered by third parties such as journalists or law enforcement. In fact, according to Verizon’s 2014 Data Breach Investigations Report, that was the case in 85% of data breaches. Furthermore, discovery, because it is by accident, often takes months, or may not happen at all when limited personnel resources are already heavily taxed. Estimates put the average breach discovery time between 200 and 230 days, an exceedingly long time for an organization’s data to be out of their control. We hope to change that. By using Matchlight, we bring the breach discovery time down to between 30 seconds and 15 minutes from the time stolen data is posted to the web, alerting our clients immediately and automatically. By dramatically reducing the breach discovery time and bringing that discovery into the organization, we’re able to reduce damages and open up more effective remediation options.

Terbium’s approach, it turns out, can be applied to traditional research into content domains to which most systems are effectively blind. At this time, a very small number of companies are able to index content that is not available to traditional content processing systems. Terbium acquires content from Web sites which require specialized software to access. Terbium’s system then processes the content, converting it into the equivalent of an old-fashioned fingerprint. Real-time pattern matching makes it possible for the company’s system to locate a client’s content, either in textual form, software binaries, or other digital representations.

One of the most significant information access innovations uses systems and methods developed by physicists to deal with the flood of data resulting from research into the behaviors of difficult-to-differentiate subatomic particles.

One part of the process is for Terbium to acquire (crawl) content and convert it into encrypted 14 byte strings of zeros and ones. A client such as a bank then uses the Terbium content encryption and conversion process to produce representations of the confidential data, computer code, or other data. Terbium’s system, in effect, looks for matching digital fingerprints. The task of locating confidential or proprietary data via traditional means is expensive and often a hit and miss affair.
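
Here is a minimal sketch in Python of how a fingerprint-and-match scheme along those lines could work. The shingling, salting, and truncated SHA-256 hashing are my assumptions for illustration; the write up does not disclose Terbium’s actual Matchlight protocol:

```python
import hashlib

FINGERPRINT_BYTES = 14  # echoes the 14 byte strings mentioned above; everything
                        # else in this sketch is assumed, not Terbium's design

def fingerprints(text, shingle_words=8, salt=b"demo-salt"):
    """Slide a fixed-size word window over normalized text and hash each window."""
    words = text.lower().split()
    prints = set()
    for i in range(max(1, len(words) - shingle_words + 1)):
        shingle = " ".join(words[i:i + shingle_words]).encode("utf-8")
        prints.add(hashlib.sha256(salt + shingle).digest()[:FINGERPRINT_BYTES])
    return prints

# Client side: fingerprint confidential records locally; only hashes leave the building.
client_prints = fingerprints("account 4521 9983 routing 0442 holder j smith balance 90210")

# Crawler side: fingerprint content scraped from a paste site or hidden service.
crawled_prints = fingerprints(
    "dump for sale account 4521 9983 routing 0442 holder j smith balance 90210")

# Matching happens on fingerprints alone; neither side exposes the original text.
if client_prints & crawled_prints:
    print("possible breach: fingerprint overlap detected")
```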

Terbium Labs changes the rules of the game and in the process has created a way to provide its licensees with anti-fraud and anti-theft measures which are unique. In addition, Terbium’s digital fingerprints make it possible to find, analyze, and make sense of digital information not previously available. The system has applications for the Clear Web, which millions of people access every minute, to the hidden content residing on the so called Dark Web.


Terbium Labs, a start up located in Baltimore, Maryland, has developed technology that makes use of advanced mathematics—what I call numerical recipes—to perform analyses for the purpose of finding connections. The firm’s approach is one that deals with strings of zeros and ones, not the actual words and numbers in a stream of information. By matching these numerical tokens with content such as a data file of classified documents or a record of bank account numbers, Terbium does what strikes many, including myself, as a remarkable achievement.

Terbium’s technology can identify highly probable instances of improper use of classified or confidential information. Terbium can pinpoint where the compromised data reside on the Clear Web, another network, or the Dark Web. Terbium then alerts the organization about the compromised data and works with the victim of Internet fraud to resolve the matter in a satisfactory manner.

Terbium’s breakthrough has attracted considerable attention in the cyber security sector, and applications of the firm’s approach are beginning to surface for disciplines from competitive intelligence to health care.

Rogers explained:

We spent a significant amount of time working on both the private data fingerprinting protocol and the infrastructure required to privately index the dark web. We pull in billions of hashes daily, and the systems and technology required to do that in a stable and efficient way are extremely difficult to build. Right now we have over a quarter trillion data fingerprints in our index, and that number is growing by the billions every day.

The idea for the company emerged from a conversation with a colleague who wanted to find out immediately if a high profile client list was ever leaked to the Internet. But, said Rogers, “This individual could not reveal to Terbium the list itself.”

How can an organization locate secret information if that information cannot be provided to a system able to search for the confidential information?

The solution Terbium’s founders developed relies on novel use of encryption techniques, tokenization, Clear and Dark Web content acquisition and processing, and real time pattern matching methods. The interlocking innovations have been patented (US8,997,256), and Terbium is one of the few, perhaps the only company in the world, able to crack open Dark Web content within regulatory and national security constraints.

Rogers said:

I think I have to say that the adversaries are winning right now. Despite billions being spent on information security, breaches are happening every single day. Currently, the best the industry can do is be reactive. The adversaries have the perpetual advantage of surprise and are constantly coming up with new ways to gain access to sensitive data. Additionally, the legal system has a long way to go to catch up with technology. It really is a free-for-all out there, which limits the ability of governments to respond. So right now, the attackers seem to be winning, though we see Terbium and Matchlight as part of the response that turns that tide.

Terbium’s product is Matchlight. According to Rogers:

Matchlight is the world’s first truly private, truly automated data intelligence system. It uses our data fingerprinting technology to build and maintain a private index of the dark web and other sites where stolen information is most often leaked or traded. While the space on the internet that traffics in that sort of activity isn’t intractably large, it’s certainly larger than any human analyst can keep up with. We use large-scale automation and big data technologies to provide early indicators of breach in order to make those analysts’ jobs more efficient. We also employ a unique data fingerprinting technology that allows us to monitor our clients’ information without ever having to see or store their originating data, meaning we don’t increase their attack surface and they don’t have to trust us with their information.

For more information about Terbium, navigate to the company’s Web site. The full text of the interview appears on Stephen E Arnold’s Xenky cyberOSINT Web site at http://bit.ly/1TaiSVN.

Stephen E Arnold, August 11, 2015

Watson: Following in the Footsteps of America Online with PR, not CD ROMs

July 31, 2015

I am now getting interested in the marketing efforts of IBM Watson’s professionals. I have written about some of the items which my Overflight system snags.

I have gathered a handful of gems from the past week or so. As you peruse these items, remember several facts:

  • Watson is Lucene, home brew scripts, and acquired search utilities like Vivisimo’s clustering and de-duplicating technology
  • IBM said that Watson would be a multi billion dollar business and then dropped that target from 10 or 12 Autonomy scale operations to something more modest. How modest the company won’t say.
  • IBM has tallied a baker’s dozen of quarterly reports with declining revenues
  • IBM’s reallocation of employee resources continues as IBM is starting to run out of easy ways to trim expenses
  • The good old mainframe is still a technology wonder, and it produces something Watson only dreams about: Profits.

Here we go. Remember high school English class and the “willing suspension of disbelief.” Keep that in mind, please.

ITEM 1: “IBM Watson to Help Cities Run Smarter.” The main assertion, which comes from unicorn land, is: “Purple Forge’s “Powered by IBM Watson” solution uses Watson’s question answering and natural language processing capabilities to let users ask questions and get evidence-based answers using a website, smartphone or wearable devices such as the Apple Watch, without having to wait for a call agent or a reply to an email.” There you go. Better customer service. Aren’t governments supposed to serve their citizens? Does the project suggest that city governments are not performing this basic duty? Smarter? Hmm.

ITEM 2: “Why I’m So Excited about Watson, IBM’s Answer Man.” In this remarkable essay, an “expert” reports that the president of IBM explained to a TV interviewer that IBM was being “reinvented.” Here’s the quote that I found amusing: “IBM invented almost everything about data,” Rometty insisted. “Our research lab was the first one ever in Silicon Valley. Creating Watson made perfect sense for us. Now he’s ready to help everyone.” Now the author is probably unaware that I was, lo, these many years ago, involved with an IBM professional, Herb Noble, who was struggling to make IBM’s own and much loved STAIRS III work. I wish to point out that Silicon Valley research did not have its hands on the steering wheel when it came to the STAIRS system. In fact, the job of making this puppy work fell to IBM folks in Germany as I recall.

ITEM 3: “IBM Watson, CVS Deal: How the Smartest Computer on Earth Could Shake Up Health Care for 70m Pharmacy Customers.” Now this is an astounding chunk of public relations output. I am confident that the author is confident that “real journalism” was involved. You know: Interviewing, researching, analyzing, using Watson, talking to customers, etc. Here’s the passage I highlighted: “One of the most frustrating things for patients can be a lack of access to their health or prescription history and the ability to share it. This is one of the things both IBM and CVS officials have said they hope to solve.” Yes, hope. It springs eternal as my mother used to say.

If you find these fact filled romps through the market activating technology of Watson convincing, you may be qualified to become a Watson believer. For me, I am reminded of Charles Bukowski’s alleged quip:

The problem with the world is that the intelligent people are full of doubts while the stupid ones are full of confidence.

Stephen E Arnold, July 31, 2015

Google Information Retrieval: Not Just Acceptable. Insanely Better

June 12, 2015

Like the TSA’s perfect bag, Google’s search is the apex of findability, according to “Google Now Has Just Gotten Insanely Better and Very Freaky.” What causes such pinnacles of praise? According to the write up:

Google announced at an event in Paris a Location Aware Search feature that can answer a new set of questions, without the user having to ask questions that should include addresses or proper place names. Asking Google Now questions like “what is this museum?” or “when was this building built?” in proximity of the Louvre in Paris will get you answers about the Louvre, as Google will be able to use your location and understand what you meant by “this” or “this building”.
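
The mechanics behind the freakiness are presumably mundane: the device’s coordinates resolve the pronoun. A minimal sketch in Python, with an invented place list and distance threshold (this is emphatically not Google’s implementation), shows the idea:

```python
import math

# Hypothetical points of interest: (name, type, latitude, longitude).
PLACES = [
    ("the Louvre", "museum", 48.8606, 2.3376),
    ("the Musee d'Orsay", "museum", 48.8600, 2.3266),
    ("the Eiffel Tower", "landmark", 48.8584, 2.2945),
]

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def resolve_deictic_query(query, lat, lon, max_km=0.5):
    """Rewrite 'this museum' or 'this building' using the nearest known place."""
    wanted = "museum" if "this museum" in query else None
    candidates = [p for p in PLACES if wanted is None or p[1] == wanted]
    name, _, plat, plon = min(candidates, key=lambda p: distance_km(lat, lon, p[2], p[3]))
    if distance_km(lat, lon, plat, plon) > max_km:
        return query  # nothing close enough; leave the query untouched
    return query.replace("this museum", name).replace("this building", name)

# Standing in the Louvre courtyard:
print(resolve_deictic_query("when was this building built?", 48.8610, 2.3360))
```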

How does the feature work when one is looking for information about the location of a Dark Web hidden services server in Ashburn, Virginia? Ah, not so helpful perhaps? What’s the value of a targeted message in this insanely better environment? Good question.

Stephen E Arnold, June 12, 2015

CyberOSINT Videos

May 12, 2015

Xenky.com has posted a single page which provides one click access to the three CyberOSINT videos. The videos provide highlights of Stephen E Arnold’s new monograph about next generation information access. You can explore the videos, which run a total of 30 minutes, on the Xenky site. One viewer said, “This has really opened my eyes. Thank you.”

Kenny Toth, May 12, 2015

Search Left Out of the Collaborative Economy Honeycomb

May 8, 2015

I must admit that I knew very little about the collaborative economy. I used AirBnB one time and worried about my little test. I survived. I rode in an Uber car one time because my son is an aficionado. I am okay with the subway and walking. I ignore apps which allegedly make my life better, faster, and more expensive.

I saw a post which pointed me to the Chief Digital Officer Summit and that pointed me to this page with the amazing honeycomb shown below. The title is “Collaborative Economy Honeycomb 2: Watch It Grow.”

collaborative honeycomb

The hexagons are okay, but the bulk of the write up is a listing of companies which manifest the characteristics of a collaborative honeycomb outfit.

Most of the companies were unfamiliar to me. I did recognize the names of a couple of the honeycombers; for example, Khan Academy, Etsy, eBay (ah, delightful eBay), Craigslist, Freelancer, the Crypto currencies (yep, my Dark Web work illuminated this hexagon in the honeycomb for me), and Indiegogo (I met the founder at a function in Manhattan).

But the other 150 companies in the list were news to me.

But what caused me to perk up and pay attention was one factoid:

There were zero search, content processing, or next generation information access companies in the list.

I formed a hypothesis which will probably give indigestion to the individuals and financial services firm pumping money into search and content processing companies. Here it is:

The wave of innovation captured in the wonky honeycomb is moving forward with search as an item on a checklist. The finding functions of these outfits boil down to social media buzz and niche marketing. Information access is application centric, not search centric.

If I am correct, why would honeycomb companies in collaboration mode want to pump money into a proprietary keyword search system? Why not use open source software and put effort into features for the app crowd?

Net net: Generating big money from organic license deals may be very difficult if the honeycomb analysis is on the beam. How hard will it be to sell a high priced search system to the companies identified in this analysis? I think that the task might be difficult and time consuming.

The good news is that the list of companies provides outfits like Attivio, BA Insight, Coveo, Recommind, Smartlogic, and other information retrieval firms with some ducks at which to shoot. How many ducks will fall in a fusillade of marketing?

One hopes that the search sharpshooters prevail.

Stephen E Arnold, May 8, 2015

Did You Know Oracle and WCC Go Beyond Search?

January 10, 2015

I love the phrase “beyond search.” Microsoft uses it, working overtime to become the go-to resource for next generation search. I learned that Oracle also finds the phrase ideal for describing the lash up of traditional database technology, the decades old Endeca technology, and the Dutch matching system from WCC Group.

You can read about this beyond search tie up in “Beyond Search in Policing: How Oracle Redefines Real time Policing and Investigation—Complementary Capabilities of Oracle’s Endeca Information Discovery and WCC’s ELISE.”

The white paper explains in 15 pages how intelligence led policing works. I am okay with the assertions, but I wonder if Endeca’s computationally intensive approach is suitable for certain content processing tasks. The meshing of matching with Endeca’s outputs results in an “integrated policing platform.”

The Oracle marketing piece explains ELISE in terms of “Intelligent Fusion.” Fusion is quite important in next generation information access. The diagram explaining ELISE is interesting:

[Diagram from the Oracle white paper: ELISE “Intelligent Fusion”]

Endeca’s indexing makes use of the MDex storage engine, which works quite well for certain types of applications; for example, bounded content and point-and-click access. Oracle shows this in terms of Endeca’s geographic output as a mash up:

[Diagram from the Oracle white paper: Endeca geographic output as a mash up]

For me, the most interesting part of the marketing piece was this diagram. It shows how two “search” systems integrate to meet the needs of modern police work:

[Diagram from the Oracle white paper: integration of the two “search” systems]

It seems that WCC’s technology, also used for matching candidates with jobs, looks for matches and then Endeca adds an interface component once the Endeca system has worked through its computational processes.
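
Read that way, the integration is a two-stage pipeline: a matching engine scores candidate records, and a discovery layer buckets the survivors into facets for point-and-click review. A minimal sketch in Python (the records, scoring, and facet fields are invented; this is not Oracle’s or WCC’s code) illustrates the division of labor:

```python
from collections import defaultdict

# Hypothetical case records a matching engine might score against a query profile.
RECORDS = [
    {"name": "J. Smit", "city": "Utrecht", "offense": "fraud", "year": 2013},
    {"name": "J. Smith", "city": "Utrecht", "offense": "fraud", "year": 2014},
    {"name": "A. Jansen", "city": "Rotterdam", "offense": "burglary", "year": 2014},
]

def match_score(record, profile):
    """Stage 1 (the ELISE-like role): loose scoring of a record against a profile.
    Real matching engines weight many attributes; this only counts two agreements."""
    score = 0
    a, b = record["name"].lower(), profile["name"].lower()
    if a in b or b in a:
        score += 2
    if record["city"] == profile.get("city"):
        score += 1
    return score

def facet(records, fields):
    """Stage 2 (the Endeca-like role): bucket matches for point-and-click browsing."""
    buckets = {f: defaultdict(list) for f in fields}
    for rec in records:
        for f in fields:
            buckets[f][rec[f]].append(rec["name"])
    return buckets

profile = {"name": "J. Smith", "city": "Utrecht"}
matched = [r for r in RECORDS if match_score(r, profile) >= 2]
print(facet(matched, ["offense", "year"]))
```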

For Oracle, ELISE and Endeca provide two legs of Oracle’s integrated police case management system.

Next generation information access systems move “beyond search” by integrating automated collection, analytics, and reporting functions. In my new monograph for law enforcement and intelligence professionals, I profile 21 vendors who provide NGIA. Oracle may go “beyond search,” but the company has not yet penetrated NGIA, next generation information access. More streamlined methods are required to cope with the type of data flows available to law enforcement and intelligence professionals.

For more information about NGIA, navigate to www.xenky.com/cyberosint.

Stephen E Arnold, January 10, 2015
