Superior Customer Service Promised through the Accenture Virtual Agent Amelia

August 17, 2016

The article titled “Accenture Forms New Business Unit Around IPsoft’s Amelia AI Platform” on ZDNet introduces Amelia as a virtual agent capable of providing services in industries such as banking, insurance, and travel. Amelia looks an awful lot like Ava from the film Ex Machina, wherein an AI robot manipulates a young programmer by appealing to his empathy. Similarly, Accenture’s Amelia is supposed to be far more expressive and empathetic than her kin in the female AI world such as Siri or Amazon’s Alexa. The article states,

“Accenture said it will develop a suite of go-to-market strategies and consulting services based off of the Amelia platform…the point is to appeal to executives who “are overwhelmed by the plethora of technologies and many products that are advertising AI or Cognitive capabilities”…For Accenture, the formation of the Amelia practice is the latest push by the company to establish a presence in the rapidly expanding AI market, which research firm IDC predicts will reach $9.2 billion by 2019.”

What’s that behind Amelia, you ask? Looks like a parade of consultants ready and willing to advise the hapless executives who are so overwhelmed by their options. The Amelia AI Platform is being positioned as a superior customer service agent who will usher in the era of digital employees.

Chelsea Kerwin, August 17, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

There is a Louisville, Kentucky Hidden /Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/

Google: The Math Club Struggles to Go Steady

May 23, 2016

I read “Google’s Go to Market Gap.” The write up points out that the Alphabet Google thing has a flaw. The disconnect between the vision and the reality was the theme of my monograph “Google: The Digital Gutenberg.” Alas, the report is out of print because the savvy publisher woke up one morning and realized that he was not savvy. Too bad.

The point I noted is:

…social networks and messaging services are not only closed but nearly impossible to compete with.

Google now finds itself on the outside looking in on many promising markets. Amazon nuked Google’s on-again, off-again shopping efforts, from the early Google Catalogs to today’s Google Shopping. Google is in the game of trying to shift from its PC-based search and ad model by playing simultaneous games of:

  • Me too. Example: Google’s “answer” to the Echo
  • Buy, buy, buy. Examples: Google’s acquisitions which seem to fade or freeze like Blogger.com
  • Innovate, innovate, innovate. Example: The new 20 percent time effort to build intrapreneurship
  • Dilution. Example: Ads which have minimal relevance to a user’s query.

The write up states:

The problem is that as much as Google may be ahead, the company is also on the clock: every interaction with Siri, every signal sent to Facebook, every command answered by Alexa, is one that is not only not captured by Google but also one that is captured by its competitors. Yes, it is likely Apple, Facebook, and Amazon are all behind Google when it comes to machine learning and artificial intelligence — hugely so, in many cases — but it is not a fair fight. Google’s competitors, by virtue of owning the customer, need only be good enough, and they will get better. Google has a far higher bar to clear — it is asking users and in some cases their networks to not only change their behavior but willingly introduce more friction into their lives — and its technology will have to be special indeed to replicate the company’s original success as a business.

When I was in high school, most of the lads and lasses in George Carlin’s algorithmic love fest did not go to the prom. The Alphabet Google thing, as I have stated many times, is like my high school math club on steroids. Prom is coming? Take an algorithm to the party? Sure, but why not ask IBM Watson? No date yet I hear.

Stephen E Arnold, May 23, 2016

UK Cybersecurity Director Outlines Agency’s Failures in Ongoing Cyberwar

April 8, 2016

The article titled “GCHQ: Spy Chief Admits UK Agency Losing Cyberwar Despite £860M Funding Boost” on International Business Times examines the surprisingly frank confession made by Alex Dewdney, a director at the Government Communications Headquarters (GCHQ). He stated that in spite of the £860M funneled into cybersecurity over the past five years, the UK is unequivocally losing the fight. The article details,

“To fight the growing threat from cybercriminals chancellor George Osborne recently confirmed that, in the next funding round, spending will rocket to more than £3.2bn. To highlight the scale of the problem now faced by GCHQ, Osborne claimed the agency was now actively monitoring “cyber threats from high-end adversaries” against 450 companies across the UK aerospace, defence, energy, water, finance, transport and telecoms sectors.”

The article makes it clear that search and other tools are not getting the job done. But a major part of the problem is resource allocation and petty bureaucratic behavior. The money being poured into cybersecurity is not going towards updating the “legacy” computer systems still in place within GCHQ, although those outdated systems represent major vulnerabilities. Dewdney argues that without basic steps like migrating to improved, current software, the agency has no hope of successfully mitigating the security risks.

 

Chelsea Kerwin, April 8, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Patents and Semantic Search: No Good, No Good

March 31, 2016

I have been working on a profile of Palantir (open source information only, however) for my forthcoming Dark Web Notebook. I bumbled into a video from an outfit called ClearstoneIP. I noted that ClearstoneIP’s video showed how one could select from a classification system. With every click, the result set changed. For some types of searching, a user may find the point-and-click approach helpful. However, there are other ways to root through what appears to be patent applications. There are the very expensive methods happily provided by Reed Elsevier and Thomson Reuters, two fine outfits. And then there are less expensive methods like Alphabet Google’s oddball patent search system or the quite functional FreePatentsOnline service. In between, you and I have many options.

None of them is a slam dunk. When I was working through the publicly accessible Palantir Technologies’ patents, I had to fall back on my very old-fashioned method. I tracked down a PDF, printed it out, and read it. Believe me, gentle reader, this is not the most fun I have ever had. In contrast to the early Google patents, Palantir’s documents lack the detailed “background of the invention” information which the salad days’ Googlers cheerfully presented. Palantir’s write ups are slogs. Perhaps the firm’s attorneys were born with dour brain circuitry.

I did a side jaunt and came across a white paper from ClearstoneIP called “Why Semantic Searching Fails for Freedom-to-Operate (FTO).” The 12 page write up is about patent searching; ClearstoneIP is a patent analysis company. The company, according to its Web site, is a “paradigm shifter.” The company describes itself this way:

ClearstoneIP is a California-based company built to provide industry leaders and innovators with a truly revolutionary platform for conducting product clearance, freedom to operate, and patent infringement-based analyses. ClearstoneIP was founded by a team of forward-thinking patent attorneys and software developers who believe that barriers to innovation can be overcome with innovation itself.

The “freedom to operate” phrase is a bit of legal jargon which I don’t understand. I am, thank goodness, not an attorney.

The firm’s search method makes much of the ontology, taxonomy, classification approach to information access. Hence the reason my exploration of Palantir’s dynamic ontology with objects tossed ClearstoneIP into one of my search result sets.

The white paper is interesting if one works around the legal mumbo jumbo. The company’s approach is remarkable and invokes some of my caution light words; for example:

  • “Not all patent searches are the same.”, page two
  • “This all leads to the question…”, page seven
  • “…there is never a single “right” way to do so.”, page eight
  • “And if an analyst were to try to capture all of the ways…”, page eight
  • “to capture all potentially relevant patents…”, page nine.

The absolutist approach to argument is fascinating.

Okay, what’s the ClearstoneIP search system doing? Well, it seems to me that it is taking a path to consider some of the subtleties in patent claims’ statements. The approach is very different from that taken by Brainware and its tri-gram technology. Now that Lexmark owns Brainware, the application of the Brainware system to patent searching has fallen off my radar. Brainware relied on patterns; ClearstoneIP uses the ontology-classification approach.

Both are useful in identifying patents related to a particular subject.
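
To make the contrast concrete, here is a minimal sketch, in Python, of the kind of character tri-gram matching a pattern-based system relies on. This is my own illustration, not Brainware’s code, and the claim text is invented.

```python
from collections import Counter

def trigrams(text):
    """Character tri-gram counts for a lightly normalized string."""
    t = " ".join(text.lower().split())
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a, b):
    """Cosine similarity between two tri-gram count vectors."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

# Invented example text; a real system would compare claims against a patent corpus.
claim = "A wearable display device comprising a transparent substrate"
candidate = "A head mounted display having a see through transparent substrate"
print(round(cosine(trigrams(claim), trigrams(candidate)), 3))
```

Pattern matching of this sort catches surface similarity regardless of vocabulary. An ontology-driven approach differs in exactly that respect: it asks a human or a controlled vocabulary to say what the document is about.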

What is interesting in the write up is its approach to “semantics.” I highlighted in billable hour green:

Anticipating all the ways in which a product can be described is serious guesswork.

Yep, but isn’t that where the role of a human with relevant training and expertise becomes important? The white paper takes the position that semantic search fails for the task ClearstoneIP addresses: FTO, or freedom-to-operate, information access.

The white paper asserted:

Semantic searching is the primary focus of this discussion, as it is the most evolved.

ClearstoneIP defines semantic search in this way:

Semantic patent searching generally refers to automatically enhancing a text-based query to better represent its underlying meaning, thereby better identifying conceptually related references.

I think the definition of semantic is designed to strike directly at the heart of the methods offered to lawyers with paying customers by Lexis-type and Westlaw-type systems. Lawyers-to-be usually have access to the commercial-type services when in law school. In the legal market, there are quite a few outfits trying to provide better, faster, and sometimes less expensive ways to make sense of the Miltonesque prose popular among the patent crowd.

The white paper describes, in a lawyerly way, the approach of semantic search systems. Note that the “narrowing” to the concerns of attorneys engaged in patent work is in the background even though the description seems to be painted in broad strokes:

This process generally includes: (1) supplementing terms of a text-based query with their synonyms; and (2) assessing the proximity of resulting patents to the determined underlying meaning of the text-based query. Semantic platforms are often touted as critical add-ons to natural language searching. They are said to account for discrepancies in word form and lexicography between the text of queries and patent disclosure.
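
As a rough illustration of those two steps, consider the toy sketch below. The synonym table and scoring function are my inventions; commercial semantic platforms use curated thesauri, statistical models, or embeddings rather than anything this crude.

```python
# Hypothetical synonym table; real platforms use curated thesauri or statistical models.
SYNONYMS = {
    "fastener": {"clip", "clasp", "latch"},
    "display": {"screen", "monitor"},
}

def expand_query(terms):
    """Step (1): supplement each query term with its synonyms."""
    expanded = set(terms)
    for term in terms:
        expanded |= SYNONYMS.get(term, set())
    return expanded

def score(patent_text, expanded_terms):
    """Step (2): a crude stand-in for 'proximity' -- the share of expanded terms present."""
    words = set(patent_text.lower().split())
    return len(expanded_terms & words) / len(expanded_terms)

query = ["display", "fastener"]
disclosure = "a screen assembly held to the housing by a latch"
print(round(score(disclosure, expand_query(query)), 2))
```

Even this toy shows the worry quoted above: the analyst has to anticipate every synonym, and anything missing from the table simply is not found.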

The white paper offers this conclusion about semantic search:

it [semantic search] is surprisingly ineffective for FTO.

Seems reasonable, right? Semantic search assumes a “paradigm.” In my experience, taxonomies, classification schema, and ontologies perform the same intellectual trick. The idea is to put something into a cubby. Organizing information makes manifest what something is and where it fits in a mental construct.

But these semantic systems do a lousy job figuring out what’s in the Claims section of a patent. That’s a flaw which is a direct consequence of the lingo lawyers use to frame the claims themselves.

Search systems use many different methods to pigeonhole a statement. The “aboutness” of a statement or a claim is a sticky wicket. As I have written in many articles, books, and blog posts, finding on point information is very difficult. Progress has been made when one wants a pizza. Less progress has been made in finding the colleagues of the bad actors in Brussels.

Palantir requires that those adding content to the Gotham data management system add tags from a “dynamic ontology.” In addition to what the human has to do, the Gotham system generates additional metadata automatically. Other systems use mostly automatic methods which are dependent on a traditional controlled term list. Others just use algorithms to do the trick. The systems which are making friends with users strike a balance; that is, using human input directly or indirectly plus some administrator-only knowledgebases, dictionaries, synonym lists, etc.
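
Here is a minimal sketch of that human-plus-automatic pattern. The ontology, field names, and function are hypothetical; this is not Palantir’s API, just an illustration of analyst tags validated against a controlled vocabulary and combined with machine-generated metadata.

```python
from datetime import datetime, timezone

# Hypothetical controlled vocabulary; a dynamic ontology would be richer and editable.
ONTOLOGY = {"Person", "Organization", "Location", "Event", "Document"}

def tag_record(text, human_tags):
    """Combine analyst-supplied tags (checked against the ontology) with automatic metadata."""
    unknown = set(human_tags) - ONTOLOGY
    if unknown:
        raise ValueError(f"Tags not in ontology: {sorted(unknown)}")
    return {
        "text": text,
        "tags": sorted(human_tags),
        # Metadata the system can add without analyst effort.
        "meta": {
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "token_count": len(text.split()),
        },
    }

print(tag_record("Meeting notes about Acme Corp in Brussels.", {"Organization", "Location"}))
```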

ClearstoneIP keeps its eye on its FTO ball, which is understandable. The white paper asserts:

The point here is that semantic platforms can deliver effective results for patentability searches at a reasonable cost but, when it comes to FTO searching, the effectiveness of the platforms is limited even at great cost.

Okay, I understand. ClearstoneIP includes a diagram which drives home how its FTO approach soars over the competitors’ systems:

[Diagram: ClearstoneIP, © 2016]

My reaction to the white paper is that for decades I have evaluated and used information access systems. None of the systems is without serious flaws. That includes the clever n-gram-based systems, the smart systems from dozens of outfits, the constantly reinvented keyword-centric systems from the Lexis-type and Westlaw-type vendors, even the simplistic methods offered by free online patent search systems like Pat2PDF.org.

What seems to be the reality of the legal landscape is:

  1. Patent experts use a range of systems. With lots of budget, many free and for-fee systems will be used. The name of the game is meeting the client’s needs and, obviously, billing the client for time.
  2. No patent search system to which I have been exposed does an effective job of thinking like a very good patent attorney. I know that the notion of artificial intelligence is the hot trend, but the reality is that seemingly smart software usually cheats by formulating queries based on analysis of user behavior, facts like geographic location, and who pays to get their pizza joint “found.”
  3. A patent search system, in order to be useful for the type of work I do, has to index germane content generated in the course of the patent process. Comprehensiveness is simply not part of the patent search systems’ modus operandi. If there’s a B, where’s the A? If there is a germane letter about a patent, where the heck is it?

I am not on the “side” of the taxonomy-centric approach. I am not on the side of the crazy semantic methods. I am not on the side of the keyword approach when inventors use different names on different patents, Babak Parviz aliases included. I am not in favor of any one system.

How do I think patent search is evolving? ClearstoneIP has it sort of right. Attorneys have to tag what is needed. The hitch in the git along has been partially resolved by Palantir-type systems; that is, the ontology has to be dynamic and available to anyone authorized to use a collection in real time.

But for lawyers there is one added necessity which will not leave us any time soon. Lawyers bill; hence, whatever is output from an information access system has to be read, annotated, and considered by a semi-capable human.

What’s the future of patent search? My view is that there will be new systems. The one constant is that, by definition, a lawyer cannot trust the outputs. The way to deal with this is to pay a patent attorney to read patent documents.

In short, like the person looking for information in the scriptoria at the Alexandria Library, the task ends up as a manual one. Perhaps there will be a friendly Boston Dynamics librarian available to do the work some day. For now, search systems won’t do the job because attorneys cannot trust an algorithm when the likelihood of missing something exists.

Oh, I almost forgot. Attorneys have to get paid via that billable time thing.

Stephen E Arnold, March 30, 2016

Artificial Intelligence Fun: The Amazon Speech Recognition Function

March 18, 2016

I read “Amazon’s Alexa Went Bonkers, Reset User’s Thermostat.” Alexa is an Amazon smart product. The idea is that one talks to it in order to perform certain home automation tasks. Hey, it is tough to punch the button on a stereo system. Folks are really busy these days.

According to the write up:

one of the things Alexa apparently cannot do quite so well is determine who her master is. During a recent NPR broadcast about Alexa and the Echo, listeners at home noticed strange activity on their own Echo devices. Any time the radio reporter gave an example of an Alexa command, several Alexas across the country pricked up their ears and leapt into action — with surprising results.

There you go. A smart device which is unable to figure out which human voice to obey.

Here is one of the examples cited in the write up:

“Listener Roy Hagar wrote in to say our story prompted his Alexa to reset his thermostat to 70 degrees,” wrote NPR on a blog recounting the tale.

Smart devices with intelligence do not—I repeat—run into objects nor do they change thermostat settings. Humans are at fault. When one uses a next generation search system to identify the location of a bad actor, nothing will go wrong.

Stephen E Arnold, March 18, 2016

IBM Supercomputer: Slick and Speedy

December 29, 2015

I read an unusual chunk of content marketing for IBM’s supercomputer. As you may know, IBM captured a US government project for supercomputers. I am not sure if IBM is in the quantum computing hunt, but I assume the IBM marketing folks will make this clear as the PR machine grinds forward in 2016.

The article on my radar is the link baity “Scientists Discover Oldest Words in the English Language, Predict Which Ones Are Likely to Disappear.”

First, the supercomputer rah rah from a university in the UK:

The IBM supercomputer at the University of Reading, known as ThamesBlue, is now one year old. Before it arrived, it took an average of six weeks to perform a computational task such as comparing two sets of words in different languages, now these same tasks can be executed in a few hours. Professor Vassil Alexandrov, the University’s leading expert on computational science and director of the University’s ACET Centre¹ said: “The new IBM supercomputer has allowed the University of Reading to push to the forefront of the research community. It underpins other important research at the university, including the development of accurate predictive models for environmental use. Based on weather patterns and the amounts of pollutant in the atmosphere, our scientists have been able to pinpoint likely country-by-country environmental impacts, such as the affect airborne chemicals will have on future crop yields and cross-border pollution”.

There you go. Testimony. Look at the wonderful use case for the IBM supercomputer: Environmental impact analyses.

Now back to the language research. It seems to me that the academic research scientists are comparing word lists. The concept seems very Watson-like even though I did not spot a reference to IBM’s much hyped smart system.

The less frequently a word is used, the greater the likelihood that word will be forgotten, disused, or tossed in the dictionary writer’s dust bin. Examples of words in trouble are:

  • dirty
  • guts
  • squeeze
  • stick
  • throw

I would suggest that IBM’s marketing corpus from the foundation of the company as a vendor of tabulating equipment right up to the PurePower name be analyzed. Well, I am no academic, and I am not sure that the University of Reading would win a popularity contest at IBM after predicting which of its product names will fall into disuse in the future. (I sure would like to see the analysis for Watson, however.)

My thought is that frequency of use analyses are useful. A fast computer is helpful. I am not sure about the embedded IBM commercial in the write up.
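
For what it is worth, the core of a frequency-of-use analysis is not exotic. A minimal sketch, assuming nothing more than a plain text corpus and a threshold I made up:

```python
import re
from collections import Counter

def word_frequencies(text):
    """Lowercased word counts for a corpus."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def endangered(counts, threshold=2):
    """Per the article's premise: the least used words are the likeliest to fall out of use."""
    return sorted(word for word, count in counts.items() if count < threshold)

corpus = "throw the stick throw the ball squeeze the lemon dirty guts"  # invented sample
print(endangered(word_frequencies(corpus)))
```

The Reading team compared word lists across languages and centuries, a much bigger job than this; the supercomputer’s contribution, per the quoted passage, is making that counting and comparing fast.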

Stephen E Arnold, December 28, 2015

Google Drastically Slows Acquisition Spending

December 3, 2015

As Google becomes Alphabet, the company seems to be taking a new approach to its investments. Business Insider declares, “Google Slammed the Brakes on its Acquisition Machine, with the Lowest Deal-Making Since 2009.” The article references Google’s 10-Q quarterly earnings report, and compares that quarter’s acquisition total of $250 million to the company’s spending sprees of years past; see the post for details. Writer Alexei Oreskovic observes:

“The M&A slowdown comes as Google has transformed itself into the Alphabet holding company, which separates various Google projects, such as fiber-based internet access, and Nest into separate companies. It also comes as new CFO Ruth Porat has taken steps to make Google more disciplined about its spending, and to return some cash to shareholders through buybacks. Stock buybacks and slowing M&A — perhaps this is the new Google. Or perhaps Google is just taking a breather on its acquisitions to digest all the companies it has swallowed up over the years. Asked about the slowing M&A, a Google representative responded by email: ‘Acquisitions by their nature are inherently lumpy and don’t follow neat 9 month patterns.’”

Well, that’s true, I suppose, as far as it goes. We hope this turn to fiscal discipline does not portend trouble for Google/Alphabet. What is the plan? We are curious to see where the company goes from here.

Cynthia Murrell, December 3, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Holy Cow. More Information Technology Disruptors in the Second Machine Age!

July 11, 2015

I read a very odd write up called “The Five Other Disruptors about to Define IT in the Second Machine Age.”

Whoa, Nellie. The second machine age. I thought we were in the information age. Dorky machines are going to be given an IQ injection with smart software. The era is defined by software, not machines. You know. Mobile phones are pretty much a commodity with the machine part defined by fashion and brand and, of course, software.

So a second machine age. News to me. I am living in the second machine age. Interesting. I thought we had the Industrial Revolution, then the boring seventh grade mantra of manufacturing, the nuclear age, the information age, etc. Now we are doing the software thing.

My hunch is that the author of this strange article is channeling Shoshana Zuboff’s In the Age of the Smart Machine. That’s okay, but I am not convinced that the one, two thing is working for me.

Let’s look at the disruptors which the article asserts are just as common as the wonky key fob I have for my 2011 Kia Soul. A gray Kia soul. Call me exciting.

Here are the four disruptors that, I assume, are about to remake current information technology models. Note that these four disruptors are “about to define IT.” These are like rocks balanced above Alexander the Great’s troops as they marched through the valleys in what is now Afghanistan. A 12 year old child could push the rock from its perch and crush a handful of Macedonians. Potential and scary enough to help Alexander to decide to march in a different direction. Hello, India.

These disruptors are the rocks about to plummet into my information technology department. The department, I wish to point out, works from their hovels and automobiles, dialing in when the spirit moves them.

Here we go:

  • Big Data
  • Cloud
  • Mobile
  • Social

I am not confident that these four disruptors have done much to alter my information technology life, but if one is young, I assume that these disruptors are just part of the everyday experience. I see grade school children poking their smart phones when I take my dogs for their morning constitutional.

But the points which grabbed my attention were the “five other disruptors.” I had to calm down because I assumed I had a reasonable grasp on the disruptors important in my line of work. But, no. These disruptors are not my disruptors.

Let’s look at each:

The Trend to NoOps

What the heck does this mean? In my experience, experienced operations professionals are needed, even at some of the smart outfits I used to work with.

Agility Becomes a First Class Citizen

I did not know that the ability to respond to issues and innovations was not essential for a successful information technology professional.

Identity without Barriers

What the heck does this mean? The innovations in security are focused on ensuring that barriers exist and are not improperly breached. The methods have little to do with an individual’s preferences. The notion of federation is an interesting one. In some cases, federation is one of the unresolved challenges in information technology. Mixing up security, “passwords,” and disparate content from heterogeneous systems is a very untidy serving of fruit salad.

Thinking about information technology after reading Rush’s book of farmer flummoxing poetry. Is this required reading for a mid tier consultant? I wonder if Dave Schubmehl has read it? I wonder if some Gartner or Forrester consultants have dipped into its meaty pages. (No pun intended.)

IT Goes Bi-Modal?

What the heck does this mean again? Referencing Gartner is a sure-fire way to raise grave concerns about the validity of the assertion. But bi-modal. Two modes. Like zero and one. Organizations have to figure out how to use available technology to meet that organization’s specific requirements. The problem of legacy and next generation systems defines the information landscape. Information technology has to cope with a fuzzy technology environment. Bi-modal? Baloney.

The Second Machine Age

Okay, I think I understand the idea of a machine age. The problem is that we are in a software and information datasphere. The machine thing is important, but it is software that allows legacy systems to coexist with more with-it approaches. This silly numbering of ages makes zero sense and is essentially a subjective, fictional, metaphorical view of the present information technology environment.

Maybe that’s why Gartner hires poets and high-profile publications employ folks who might spend an hour discussing the metaphorical implications of “bare ruined choirs.”

None of these five disruptions makes much sense to me.

My hunch is that you, gentle reader, may be flummoxed as well.

Stephen E Arnold, July 11, 2015

Library Design Improves

June 10, 2015

I like libraries. If you enjoy visiting them as well, navigate to “These Modern Libraries Look Like Alien Spaceships On The Inside.” Among the libraries featured are the Beinecke Rare Book and Manuscript Library (Yale), Bibliotheca Alexandrina, and Biblioteca España.

Stephen E Arnold, June 9, 2015

Advanced Analytics Are More Important Than We Think

February 3, 2015

Alexander Linden, one of Gartner’s research directors, made some astute observations about advanced analytics and data science technologies. Linden shared his insights with First Post in the article, “Why Should CIOs Consider Advanced Analytics?”

Chief information officers are handling more data and relying on advanced analytics to manage it. The data is critical for gaining market insights, generating more sales, and retaining customers. The old business software cannot handle the overload anymore.

What is astounding is that many companies believe they are already using advanced analytics, when in fact they can improve upon their current methods. Advanced analytics are not an upgraded version of normal, descriptive analytics. They use more problem-solving tools such as predictive and prescriptive analytics.
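
The distinction is easier to see with a toy example. This is a minimal sketch with invented quarterly numbers: descriptive analytics summarizes what already happened, while a predictive step fits a trend and projects it forward (prescriptive analytics would go further and recommend an action).

```python
def descriptive(sales):
    """Descriptive analytics: summarize what already happened."""
    return {"total": sum(sales), "average": sum(sales) / len(sales)}

def predictive(sales):
    """Predictive analytics: fit a least-squares trend line and project one period ahead."""
    n = len(sales)
    xs = range(n)
    x_bar, y_bar = sum(xs) / n, sum(sales) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, sales)) / sum((x - x_bar) ** 2 for x in xs)
    return y_bar + slope * (n - x_bar)  # forecast for the next period

quarterly_sales = [100, 110, 125, 135]  # invented numbers
print(descriptive(quarterly_sales))
print(round(predictive(quarterly_sales), 1))
```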

Gartner also flings out some really big numbers:

“One of Gartner’s new predictions says that through 2017, the number of citizen data scientists will grow five times faster than the number of highly skilled data scientists.”

This is akin to there being more people able to code and create applications than there are skilled engineers with college degrees. It will be a do-it-yourself mentality in the data analytics community, but Gartner stresses that backyard advanced analytics will not cut it. Companies need to continue to rely on skilled data scientists to interpret the data and network it across the business units.

Whitney Grace, February 03, 2015
Sponsored by ArnoldIT.com, developer of Augmentext

