Palantir Technologies Sparks a Controversial Metaphor

December 4, 2020

In an interview with a policeware/intelware vendor, I learned about a financial company’s view of Palantir Technologies. This is the 13-year-old startup which recently went public. The company had an astounding 130 customers, about $600 million in revenue, and a modest $500 million in losses over 12 months.

Here’s the comment which I chased down in its original tweeter output glossiness:

[Image: tweet]

The operative phrase is:

A full casino.

If I were a Palantirian unpacking boxes in Denver, Colorado, I would hit the Yellow Pages or the Seeing Stone. An attorney might have some thoughts about a malicious metaphor disseminated via the marvelous firm managed part time by a bearded CEO.

Palantir. A full casino. Whatever does that mean?

Stephen E Arnold, December 4, 2020

Jargon to Watch: Facebook Out Innovates with Wordage

December 4, 2020

I read “Facebook Splits Up Unit at Center of Contested Election Decisions.” The write up contains yet another management maneuver from the Oracular High School Science Club Management Methods. Feel free to ponder the article; I did not. Instead my attention was pinned by the arrow-clear thinking expressed in this two-word confection:

central integrity

Here’s the deck chair shuffling on the good ship USS Facebook:

Employees from Civic Integrity, who have been at the center of Facebook’s contested decisions on how to handle posts from politicians such as President Donald Trump and its influence in politically fragile countries like Myanmar, will now join teams in a bigger organization called Central Integrity under Facebook vice president Guy Rosen, according to the memos sent Wednesday and two current employees.

From civic to central integrity. Remarkable. This phrase has taken pride of place from revenge bedtime procrastination, execution management systems, and intersubjective process.

Kudos to the Facebook phrase creating team.

Stephen E Arnold, December 4, 2020

Cloud Management: Who Is Responsible When Something Goes Wrong?

December 4, 2020

I read “Deloitte Helps Build Evolving Kinetic Enterprises by Powering SAP on AWS.” Wow, I have a collection of buzzwords which I use for inspiration or for a good laugh. I love the idea of “evolving kinetic enterprises.” Let’s see. Many businesses are busy reacting to the global pandemic, social unrest, and financial discontinuities. But kinetic enterprises!

The write up explains via a quote from a consultant:

“The future, though, is all about built-to-evolve. And that’s exactly what the kinetic enterprises are. It’s really how we’re helping our clients [to] create the right technology infrastructures that evolve with their business.”

Okay, let’s put aside the reacting part of running a business today. These organizations are supposed to be “kinetic.” In military usage, the word describes a thing with kinetic energy: a bomb, a bullet, or a directed energy beam. Kinetic suggests motion, either forward or backward.

The kinetic enterprise is supposed to move, do killer stuff? Obviously companies do not want to terminate their customers with extreme prejudice. Hold that thought. Most don’t, I assume, although social media sparking street violence may be a trivial, secondary consequence. So, let’s go with “most of the time.”

Set the craziness of the phrase aside. Ignore the wonky consultant spin, the IBM-inspired SAP software maze, and the role of Amazon AWS. What about this question:

When this cloud management soufflé collapses, who is responsible?

Am I correct in recalling that Deloitte had a slight brush with Autonomy? AWS went offline last week. And SAP? Well, just ask a former Westinghouse executive how that SAP implementation worked out.

The message in the story is that:

  1. No consultant on earth will willingly accept responsibility for making a suggestion that leads to a massive financial problem for a client. That’s why those reports include options. Clients decide what poison to sprinkle on their Insecure Burger.
  2. SAP has been dodging irate users and customers for a while, since 1972. How is your TREX search system working? What about those automated roll-up reports?
  3. Amazon AWS is a wonderful outfit. Sure, there are thousands of functions, features, and options. When one goes off the rails, how does that problem get remediated? Does Mr. Bezos jump in?

The situation set forth in the article makes clear that each of these big outfits (Deloitte, AWS, and SAP) will direct the customer with a problem to someone else.

This is charmingly characterized as a “no throat to choke” situation.

Stephen E Arnold, December 4, 2020

Semantic Scholar: Mostly Useful Abstracting

December 4, 2020

A new search engine specifically tailored to scientific literature uses a highly trained algorithm. MIT Technology Review reports, “An AI Helps You Summarize the Latest in AI” (and other computer science topics). Semantic Scholar generates tl;dr sentences for each paper on an author’s page. Literally—they call each summary, and the machine-learning model itself, “TLDR.” The work was performed by researchers at the Allen Institute for AI and the University of Washington’s Paul G. Allen School of Computer Science & Engineering.

AI-generated summaries are either extractive, picking a sentence out of the text to represent the whole, or abstractive, generating a new sentence. Obviously, an abstractive summary would be more likely to capture the essence of a whole paper—if it were done well. Unfortunately, due to limitations of natural language processing, most systems have relied on extractive algorithms. This model, however, may change all that. Writer Karen Hao tells us:

“How they did it: AI2’s abstractive model uses what’s known as a transformer—a type of neural network architecture first invented in 2017 that has since powered all of the major leaps in NLP, including OpenAI’s GPT-3. The researchers first trained the transformer on a generic corpus of text to establish its baseline familiarity with the English language. This process is known as ‘pre-training’ and is part of what makes transformers so powerful. They then fine-tuned the model—in other words, trained it further—on the specific task of summarization. The fine-tuning data: The researchers first created a dataset called SciTldr, which contains roughly 5,400 pairs of scientific papers and corresponding single-sentence summaries. To find these high-quality summaries, they first went hunting for them on OpenReview, a public conference paper submission platform where researchers will often post their own one-sentence synopsis of their paper. This provided a couple thousand pairs. The researchers then hired annotators to summarize more papers by reading and further condensing the synopses that had already been written by peer reviewers.”

The team went on to add a second dataset of 20,000 papers and their titles. They hypothesized that, as titles are themselves a kind of summary, this would refine the model further. They were not disappointed. The resulting summaries average 21 words to summarize papers that average 5,000 words, a compression of 238 times. Compare this to the next best abstractive option at 36.5 times and one can see TLDR is leaps ahead. But are these summaries as accurate and informative? According to human reviewers, they are even more so. We may just have here a rare machine learning model that has received enough training on good data to be effective.
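The extractive approach the article contrasts with TLDR’s abstractive model is easy to picture: score each sentence by how frequent its words are in the whole document, and return the top-scoring sentence verbatim. (The 238× figure, by the way, is just 5,000 words divided by 21.) Here is a toy sketch of that extractive baseline — the scoring rule and the sample “paper” are illustrative, not from Semantic Scholar’s system:

```python
from collections import Counter
import re

def extractive_tldr(text: str) -> str:
    """Return the single sentence whose words are most frequent overall --
    the 'extractive' approach, which can only copy, never rephrase."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        toks = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    return max(sentences, key=score)

paper = (
    "Transformers now power most NLP systems. "
    "We fine-tune a pre-trained transformer on summarization. "
    "Transformers pre-trained on large corpora transfer well to summarization tasks. "
    "Experiments ran on a single GPU."
)
print(extractive_tldr(paper))
```

An abstractive model like TLDR instead generates a new sentence, which is why it needs the pre-train-then-fine-tune pipeline the quote describes rather than a scoring heuristic.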

The Semantic Scholar team continues to refine the software, training it to summarize other types of papers and to reduce repetition. They also aim to have it summarize multiple documents at once—good for researchers in a new field, for example, or policymakers being briefed on a complex issue. Stay tuned.

Cynthia Murrell, December 4, 2020

A Facebook Promise: Good As Gold

December 3, 2020

Oops. A Facebook algorithm’s mistake is causing the company to offer apologies and refunds, we learn from CNBC’s article, “Facebook to Reimburse Some Advertisers After Miscalculating Effectiveness Data.” Citing a report from Ad Exchanger, writer Lucy Handley informs us:

“The company’s ‘conversion lift’ tool suffered a glitch that reportedly affected thousands of ads between August 2019 and August 2020. Facebook fixed the error in September and is now offering a credit to clients ‘meaningfully affected’ by the bug. Conversion lift helps brands understand how ads lead to sales, using a ‘gold-standard methodology’ that links ads on Facebook’s platforms, including Instagram, to business performance, according to an explanation of the tool on Facebook’s website. The free tool shows ads to separate test and control groups and then compares sales conversions for each. Then, based on the results of the study, an advertiser can decide how much to spend on the social network.”
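The test-versus-control arithmetic behind a conversion-lift study is simple enough to sketch. The figures below are illustrative, not Facebook’s:

```python
def conversion_lift(test_conversions: int, test_size: int,
                    control_conversions: int, control_size: int) -> float:
    """Relative lift of the test group (shown ads) over the control
    group (ads withheld): how much extra conversion the ads bought."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    return (test_rate - control_rate) / control_rate

# 3% of the exposed group converted vs. 2% of the hold-out group:
lift = conversion_lift(300, 10_000, 200, 10_000)
print(f"{lift:.0%}")  # prints "50%"
```

A bug in this comparison step inflates or deflates the reported lift, and advertisers size their budgets off that number — which is why a year-long glitch warrants refunds.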

Though the error was discovered and fixed in September, the company is just now getting around to informing clients. According to Facebook, a “small number” of advertisers were affected, though what that behemoth considers a small number is unclear. Handley reminds us:

“This isn’t the first time Facebook has admitted mistakes in reporting. In September 2016, it said it overestimated the average time people spent viewing video ads over a two-year period, and in 2017 a report found that Facebook claimed to reach more people in some U.S. states and cities than official population data said existed in those areas.”

Yep, Facebook is starting 2021 with its true colors flying. I suppose it is nice to see some things remain consistent.

Cynthia Murrell, December 3, 2020

Want to Manipulate Humans? Try These Hot Buttons

December 3, 2020

Okay, thumb typing marketers, insights from academia. Navigate to “We Are All Behavioral, More or Less: A Taxonomy of Consumer Decision Making.” The write up is available from Dartmouth, home of behavioral economists and psychologists and okay pizza.

The write up is 70 pages in length and chock full of jargon and academic thinking. Nevertheless, the author, one Victor Stango, reveals some suggestive information.

Here are a couple of examples:

  • Table 3, “Correlations among behavioral biases, and between biases and other decision inputs,” offers insight into pairings of bias factors.

  • Table 5, “Rotated 8-factor models and loadings of decision inputs on common factors,” provides a “look up table” with values to help guide a sales pitch.

The list of hot button factors includes:

  • Present bias
  • Choice type
  • Risk biases
  • Confidence
  • Math bias
  • Attention
  • Patience vs. risk aversion
  • Cognitive skills
  • Personality

Net net: Manipulate biases by combining factors. Launch those online marketing campaigns via social media with confidence, p-value lovers.

Stephen E Arnold, December 2, 2020

Why Investigative Software Is Expensive

December 3, 2020

In a forthcoming interview, I explore industrial-strength policeware and intelware with a person who was named Intelligence Officer of the Year. In that interview, which will appear in a few weeks, the question of the cost of policeware and intelware is addressed. Systems like those from IBM’s i2, Palantir Technologies, Verint, and similar vendors are pricey. Not only is there a six or seven figure license fee; the client has to pay for training, often months of instruction. Plus, these i2-type systems require systems and engineering support. One tip-off to the fully loaded costs is the phrase “forward deployed engineer.” The implicit message is that these i2-type systems require an outside expert to keep the digital plumbing humming along. But who is responsible for the data? The user. If the user fumbles the data bundle, bad outputs are indeed possible.

What’s the big deal? Why not download Maltego? Why not use one of the $100 to $3,000 solutions from jazzy startups run by former intelligence officers? These are “good enough,” some may assert. One facet of the cost of industrial-strength systems available to qualified licensees is a little-appreciated function: dealing with data.

“Keep Data Consistency During Database Migration” does a good job of explaining what has to happen in a reliable, consistent way when one of multiple data sources contributes “new” or “fresh” data to an intelware or policeware system. The number of companies providing middleware to perform these functions is growing. Why?

Most companies wanting to get into the knowledge extraction business have to deal with the issues identified in the article. Most organizations do not handle these tasks elegantly, rapidly, or accurately.

Injecting incorrect, stale, or inaccurate data into a knowledge-centric process like those in industrial-strength policeware causes those systems to output unreliable results.

What’s the consequence?

Investigators and analysts learn to ignore certain outputs.

Why? The consequences can be more serious than a flawed diagram whipped up by an MBA who worries only about the impression he or she makes on a group of prospects attending a Zoom meeting.

Data consistency is a big deal.
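The kind of consistency check the cited article describes can be sketched with row fingerprinting: hash each source record, hash the migrated copy, and flag any record whose content silently changed in transit. This is a minimal illustration of the idea, not any vendor’s actual middleware:

```python
import hashlib

def row_fingerprint(row: dict) -> str:
    """Stable hash of a record, with field order normalized so the
    same content always yields the same fingerprint."""
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def find_drift(source_rows: list, target_rows: list, key: str = "id") -> list:
    """Compare source and migrated rows by fingerprint; return the keys
    whose content no longer matches, or that went missing entirely."""
    target_by_key = {r[key]: row_fingerprint(r) for r in target_rows}
    drifted = []
    for r in source_rows:
        if target_by_key.get(r[key]) != row_fingerprint(r):
            drifted.append(r[key])
    return drifted

source = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
target = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "GLOBEX"}]  # silently mangled
print(find_drift(source, target))  # prints [2]
```

Scale that check across dozens of feeds updating continuously and you have one reason industrial-strength policeware costs what it does.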

Stephen E Arnold, December 2, 2020

AWS Panorama: Such a Happy Name!

December 2, 2020

“AWS Announces Panorama, a Device That Adds Machine Learning Technology to Any Camera” caught my attention. (Now don’t think I ignored Amazon’s work monitoring system called Monitron, a wonderful name, very RoboCop-like. I have not.) I noted the word “any” in the title. Very wide in scope. Appropriate in an era of data harvesting. Also, I quite liked the “appliance” moniker. What could be more appropriate for a company with more than one million employees, oodles of government contracts with assorted nation states, and customers hungry to know as much as possible about humanoids and other entities of interest? A toaster, a data Hoover, a device to exploit the info-pressure differential between those with the gizmo and those monitored by the gizmo.

The write up states:

…enterprises continue to clamor for new machine learning-enabled video recognition technologies for security, safety and quality control. Indeed, as the COVID-19 pandemic drags on, new protocols around building use and occupancy are being adopted to not only adapt to the current epidemic, but plan ahead for spaces and protocols that can help mitigate the severity of the next one.

And law enforcement and intelligence applications? Whoops. Not included in the write up nor in the AWS blog post. Amazon is not in the policeware and intelware business. At least, that’s what I have been told.

Stephen E Arnold, December 2, 2020

Some US Big Tech Outfits Say Laisse Tomber

December 2, 2020

The trusted “real news” outfit Thomson Reuters published “Amazon, Apple Stay Away from New French Initiative to Set Principles for Big Tech.” Quelle surprise! The “principle” is the silly notion of getting big US technology companies to pay their taxes, fair taxes. Incroyable? Companies not getting with the program allegedly include Apple, Facebook, Google, and Microsoft. These four firms are likely to perceive the suggestion of fairness as a demonstration of flawed logic. It is possible that the initiative may become a cause célèbre because money. France is a mere country anyway.

Stephen E Arnold, December 2, 2020

Ah, Chatbots. Unfortunately, Inevitable Because Who Wants to Support Customers?

December 2, 2020

Lest one think AI is here to make our lives easier, one should think again. Though the technology may bring new capabilities and insights, users must put in work and surmount frustration to get results. Bizcommunity.com discusses “The Unsuspected Stumbling Blocks of AI for Customer Experience.” Writer Mathew Conn specifically examines the use of chatbots here. He writes:

“While chatbots successfully enable one-to-one conversations with customers through automated interfaces and are a great way to deliver immediate responses, they are not right for any and all customer interactions. The first, and possibly most important failure of chatbots, is a direct result of the organization in question not identifying what customer interactions are right for enhancement with chatbots. … Because chatbots use open source libraries, most won’t be customized to the organization’s specific industry or customers. Pre-trained bots will be limited to their pre-programmed decision path and are limited by the designer or programmer’s understanding of customer behaviors and requests. While chatbots don’t reason, smarter bots can cope better with some language nuances; however, without human judgment, chatbot accuracy will always be limited. Pre-trained chatbots follow a structured conversation plan and can lose the flow fairly easily. With more access to customer history and data, smarter chatbots can ‘learn’ customer preferences. However, to keep context, chatbots need every possible response to every possible customer request.”

The more complex the interaction, the more likely customers will want to converse with a human. It can be useful to begin interactions with a chatbot and then shift to a human worker, but a problem occurs when such a shift means changing platforms from a chat window to phone or email. If the company does not maintain consistency across all its channels, the customer must restart the explanation from the beginning. This does not make for a happy customer or, by extension, a good reputation for the business.
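The quote’s point about pre-trained bots following a structured conversation plan can be made concrete: a scripted bot is essentially a fixed decision path, and anything off the path should trigger the human handoff. The states, phrases, and answers below are hypothetical, purely for illustration:

```python
# A pre-programmed decision path: each state maps recognized customer
# phrases to the next state. Anything unrecognized falls off the path.
DECISION_PATH = {
    "start": {"billing": "billing", "shipping": "shipping"},
    "billing": {"refund": "refund_policy"},
    "shipping": {"tracking": "tracking_info"},
}

ANSWERS = {
    "refund_policy": "Refunds are issued within 5 business days.",
    "tracking_info": "Your tracking link is in the confirmation email.",
}

def chatbot(utterances: list) -> str:
    """Walk the scripted path; escalate to a human the moment the
    customer says something the script did not anticipate."""
    state = "start"
    for text in utterances:
        state = DECISION_PATH.get(state, {}).get(text.lower().strip())
        if state in ANSWERS:
            return ANSWERS[state]
        if state is None:
            return "HANDOFF: routing you to a human agent."
    return "HANDOFF: routing you to a human agent."

print(chatbot(["billing", "refund"]))        # scripted path works
print(chatbot(["billing", "why so slow?"]))  # off-script -> handoff
```

The handoff line is exactly where the channel-consistency problem above bites: if the human agent cannot see the transcript, the customer starts over.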

Chatbots are not the only AI function that is less of a panacea than vendors would like us to believe. Before investing in any AI solution, businesses should do their research and make certain they understand what they are getting, whether it will truly address their unique needs, and how to make the most of it.

Just cut costs and move on.

Cynthia Murrell, December 2, 2020
