Xnor Touch Points

November 29, 2019

If you are not familiar with Xnor.ai, navigate to the company’s Web site and read the cultural information. There is a reference to diversity, the company being a “high growth start up,” and something called “ethics touch points.”

I think one of the touch points is not honoring deals with licensees, but my information comes from a razzle dazzle publication. “Wyze’s AI-Powered Person Detection Feature Will Temporarily Disappear Next Year” asserts:

Wyze’s security cameras will temporarily lose their person detection feature in January 2020 after the AI startup it partnered with on the feature abruptly terminated their agreement. In a post on its forums, Wyze said that its agreement with Xnor.ai included a clause allowing the startup to terminate the contract “at any moment without reason.”

There’s a reference to “mistakes.” In the tradition of 21st-century information, there’s no definition of “mistake.”

I noted this passage: “Wyze’s low prices come with risks.”

Back up.

What’s an ethical touch point? Xnor.ai states:

Xnor is actively engaging in conversations around the ethical implications of AI within our society through “ethics touch points” that exist within our normal working patterns. These touch points allow us to actively review specific AI use cases and make informed decisions without compromising the speed in which we operate as a start-up.

Maybe recognizing a face is not good? When is recognizing a face good? I struggle with the concept of ethics mostly because I am flooded with examples of ethical crossroads each day. Was a certain lawyer in Ukraine for himself or for others? Was the fuselage failure of a 777 a mistake or a downstream consequence of an ethical log jam? Was the disappearance of certain map identifiers a glitch or an example of situational ethical analysis?

With about $15 million in funding, the two-year-old Xnor.ai is an interesting company. What’s interesting is that Madrona Ventures may find itself with some thorns in its britches after pushing through the thicket of ethical touch points.

In 2017, Pymnts.com ran a story with this headline: “AI Startup Xnor.ai Raises $2.6M To Bring AI To All Devices.” See the word “all.”

That should have come with a footnote maybe? Other possibilities are: [a] the technology does not work, [b] Wyze did not pay a bill, [c] Xnor.ai has done what Aristotle did ineffectively.

Stephen E Arnold, November 29, 2019

Potpourr-AI

November 24, 2019

Here is a useful roundup of information for those interested in machine learning. Forbes presents a thorough collection of observations, citing several different sources, about the impact of deep learning on the AI field in, “Amazon Saw 15-Fold Jump in Forecast Accuracy with Deep Learning and Other AI Stats.” As the title indicates, under the heading AI Business Impact, writer Gil Press reports:

“When Amazon switched from traditional machine learning techniques to deep learning in 2015, it saw a 15-fold increase in the accuracy of its forecasts, a leap that has enabled it to roll-out its one-day Prime delivery guarantee to more and more geographies; MasterCard has used AI to cut in half the number of times a customer has their credit card transaction erroneously declined, while at the same time reducing fraudulent transactions by about 40%; and using predictive analytics to spot cyber attacks and waves of fraudulent activity by organized crime groups helped Mastercard’s customers avoid some $7.5 billion worth of damage from cyberattacks in just the past 10 months [Fortune]”

A couple of other examples under AI Business Impact include the Press Association’s RADAR news service, which generated 50,000 local news stories in three months with the help of but six human reporters; and the Royal Dutch Shell refinery in Rotterdam, whose sensor data analysis helped it avoid spending about $2 million on maintenance trips.

Press arranges the rest of his AI information into several more headings: AI Business Adoption, where we learn nearly all respondents to an IFS survey of business leaders have plans to implement AI functionality; AI Consumer Attitudes, where he tells us a pessimistic 10% of Mozilla-surveyed consumers think AI will make our lives worse; AI Research Results, under which is reported that AI can now interpret head CT scans as well as a highly trained radiologist; AI Venture Capital Investments; AI Market Forecasts; and AI Quotable Quotes. The article concludes with this noteworthy quotation:

“‘To make deliberate progress towards more intelligent and more human-like artificial systems… we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans’”—Francois Chollet

We recommend interested readers check out the article for its many more details.

Cynthia Murrell, November 22, 2019

AI: Semi-Capable? Absolutely!

November 20, 2019

Reporter Jeremy Kahn at Fortune ponders an important question—“A.I. Is Everywhere—But Where Is Human Judgment?” Kahn recently spent a week at the Web Summit in Lisbon, where he learned just how much machine learning has taken over at many companies. From product recommendations to delivery-drone operation to the prevention of crime, algorithms are making many real-world decisions. For example, Amazon CTO Werner Vogels explained at the conference that machine learning is at the heart of “absolutely everything” at his company. This takeover really took off in 2015, when Amazon took up deep learning and found its forecasts became 15 times more accurate. The article also notes that Mastercard uses predictive analytics to foil cyber attacks and fraudulent activity by organized crime.

After the rah-rah conference, however, Kahn found some sobering news in a report from the National Transportation Safety Board. He writes:

“While Web Summit was all about the promise of A.I., this news from last week ought to give people pause: the National Transportation Safety Board released its preliminary report investigating how one of Uber’s self-driving cars came to strike and kill 49-year old Elaine Herzberg as she crossed the road in Tempe, Arizona, last year. The NTSB found in its ‘Vehicle Automation Report’ that while the car’s sensors did detect Herzberg six seconds before hitting her, the self-driving system failed to correctly classify her as a pedestrian, in part because Uber had trained its computer vision system to only expect pedestrians in designated cross-walks. What’s more, the agency concluded that Uber’s engineers had programmed the car to only brake or take evasive maneuvers if its computer systems were highly confident that a collision was likely. Humans decided to train the system in this way and set these tolerances. Most likely, this was done to prioritize the comfort of Uber’s passengers, who would have found sudden braking and unexpected swerves annoying and alarming. And it’s ultimately these human decisions that doomed Herzberg.”

The piece concludes with a simple but important suggestion—anyone involved with deploying A.I. with real-world impact should read the NTSB report. That is a good idea.

Cynthia Murrell, November 20, 2019

UAE: More AI, Less PE

November 9, 2019

The field of artificial intelligence has reached a milestone—the first graduate-level university dedicated to it is set to open next year. Interesting Engineering reports, “World’s First AI University Has More than 3200 Applicants Already.” The Mohamed bin Zayed University of Artificial Intelligence is being built in Abu Dhabi. It is named for the country’s crown prince, who is big on using science to build up his nation’s human capital. The school received those thousands of applications in its first week of admissions. The aspiring grad students are located around the world, but most are in the UAE, Saudi Arabia, Algeria, Egypt, India, and China. It is no surprise interest is so high—students will get a sweet deal. Reporter Donna Fuscaldo writes:

“The school aims to create a new model of academia and research for AI and to ‘unleash AI’s full potential.’ Students get access to some of the most advanced AI systems as part of the program. Students can pursue Master of Science (MSc) and PhD-level programs in machine learning, computer vision, and natural language processing. All admitted students get a full scholarship, monthly allowance, health insurance, and accommodation. The first class will commence in September 2020.”

The time is ripe for such an institution. With AI now permeating nearly every industry, research firm PwC Global predicts that by 2030 it will have a $15.7 trillion impact on the global economy ($6.6 trillion from increased productivity and $9.1 trillion from “consumption-side effects”) and provide a 26% GDP boost for local economies. It is no wonder many students are eager to get in on the ground floor.

Cynthia Murrell, November 09, 2019

Visual Data Exploration via Natural Language

November 4, 2019

New York University announced a natural language interface for data visualization. You can read the rah rah from the university here. The main idea is that a person can use simple English to create complex machine learning based visualizations. Sounds like the answer to a Wall Street analyst’s prayers.

The university reported:

A team at the NYU Tandon School of Engineering’s Visualization and Data Analytics (VIDA) lab, led by Claudio Silva, professor in the department of computer science and engineering, developed a framework called VisFlow, by which those who may not be experts in machine learning can create highly flexible data visualizations from almost any data. Furthermore, the team made it easier and more intuitive to edit these models by developing an extension of VisFlow called FlowSense, which allows users to synthesize data exploration pipelines through a natural language interface.
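To give a flavor of what mapping plain English onto a dataflow pipeline involves, here is a deliberately tiny keyword-based sketch. The chart names, the “columns follow the word ‘of’” rule, and the pipeline stages are all invented for illustration; FlowSense’s actual grammar and dataflow model are far richer than this toy.

```python
# Toy sketch: map a plain-English request to a dataflow-style
# visualization spec. Hypothetical rules -- NOT FlowSense's grammar.

CHART_KEYWORDS = {
    "histogram": "histogram",
    "scatter": "scatter_plot",
    "line": "line_chart",
}

def parse_query(query: str) -> dict:
    """Turn e.g. 'show a histogram of price' into a pipeline spec."""
    words = query.lower().replace(",", " ").split()
    # First word that starts with a known chart keyword wins.
    chart = next((c for w in words
                  for k, c in CHART_KEYWORDS.items() if w.startswith(k)),
                 "table")
    # Toy convention: column names follow the word "of".
    columns = []
    if "of" in words:
        idx = words.index("of")
        columns = [w for w in words[idx + 1:]
                   if w not in ("and", "vs", "versus")]
    return {"pipeline": ["load_data", "select_columns", "render"],
            "chart": chart,
            "columns": columns}

spec = parse_query("show a histogram of price")
```

Even this crude version shows why the approach appeals to non-experts: the user states an intent, and the system assembles the load/select/render stages behind the scenes.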

You can download (as of November 3, 2019, but no promises the document will be online after this date) “FlowSense: A Natural Language Interface for Visual Data Exploration within a Dataflow System.”

DarkCyber wants to point out that talking to a computer to get information continues to be of interest to many researchers. Will this innovation put human analysts out of their jobs?

Maybe not tomorrow but in the future. Absolutely. And what will those newly-unemployed people do for money?

Interesting question and one some may find difficult to consider at this time.

Stephen E Arnold, November 4, 2019


Deepfake Detection: Unsolvable

November 3, 2019

“CEO of Anti-Deepfake Software Says His Job Is Ultimately a Losing Battle” describes what may be an unsolvable problem. Manipulated content may be in the category of the Millennium Prize Problems, just more complicated. The slightly gloomy write up quotes the founder of Amber Video (Shamai Allibhai):

“Ultimately I think it’s a losing battle. The whole nature of this technology is built as an adversarial network where one tries to create a fake and the other tries to detect a fake. The core component is trying to get machine learning to improve all the time…Ultimately it will circumvent detection tools.”
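The adversarial loop Allibhai describes, one network creating fakes while another learns to detect them, is the generative adversarial network (GAN) idea. A stripped-down numerical sketch, purely illustrative and not any real deepfake system: the “data” is a single real value, the detector is a logistic classifier, and the two sides take alternating gradient steps.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_gan(real=1.0, g=-1.0, lr=0.05, steps=3000):
    """Toy GAN: a generator (single value g) tries to mimic the real
    value, while a logistic detector D(x) = sigmoid(w*x + b) tries to
    tell real from fake. Returns the history of g over training."""
    w, b = 0.0, 0.0
    history = []
    for _ in range(steps):
        # Detector step: push D(real) toward 1 and D(g) toward 0.
        d_real, d_fake = sigmoid(w * real + b), sigmoid(w * g + b)
        w -= lr * ((d_real - 1.0) * real + d_fake * g)
        b -= lr * ((d_real - 1.0) + d_fake)
        # Generator step: move g so the detector scores it as real.
        d_fake = sigmoid(w * g + b)
        g += lr * (1.0 - d_fake) * w
        history.append(g)
    return history
```

As the alternation proceeds, the fake value drifts toward the real one until the detector can no longer separate them. That is the “losing battle” in miniature: improving the detector mostly trains a better forger.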

The newspaper publishing this observation did not include Jorge Luis Borges’ observation made in the Paris Review in 1967:

Really, nobody knows whether the world is realistic or fantastic, that is to say, whether the world is a natural process or whether it is a kind of dream, a dream that we may or may not share with others.

But venture funding makes the impossible appear to be possible until it is not.

Stephen E Arnold, November 3, 2019

Automating Machine Learning: Works Every Time

October 24, 2019

Automated machine learning, or AutoML, is the natural next step in the machine learning field. The technique automates the process of creating machine learning models, saving data scientists a lot of time and frustration. Now, InfoWorld reports, “A2ML Project Automates AutoML.” Automation upon automation, if you will.

An API and command-line tools make up the beta-stage open source project from Auger.AI. The company hopes the project will lead to a common API for cloud-based AutoML services. The API naturally works with Auger.AI’s own API, but also with Google Cloud AutoML and Azure AutoML. Writer Paul Krill tells us:

“Auger.AI said that the cloud AutoML vendors all have their own API to manage data sets and create predictive models. Although the cloud AutoML APIs are similar—involving common stages including importing data, training models, and reviewing performance—they are not identical. A2ML provides Python classes to implement this pipeline for various cloud AutoML providers and a CLI to invoke stages of the pipeline. The A2ML CLI provides a convenient way to start a new A2ML project, the company said. However, prior to using the Python API or the CLI for pipeline steps, projects must be configured, which involves storing general and vendor-specific options in YAML files. After a new A2ML application is created, the application configuration for all providers is stored in a single YAML file.”
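The shared-stages idea in the quoted passage, one interface over several cloud AutoML back ends, can be sketched as a small provider hierarchy. The class and method names below are invented for illustration only; the real A2ML API and its YAML configuration live in the project’s GitHub repository.

```python
# Hedged sketch of a provider-agnostic AutoML pipeline. These names
# are hypothetical, not the actual A2ML classes.

class AutoMLProvider:
    """Common stages every cloud AutoML vendor exposes in some form."""
    name = "base"

    def import_data(self, path):
        return f"{self.name}: imported {path}"

    def train(self, target):
        return f"{self.name}: trained model predicting {target}"

    def evaluate(self):
        return f"{self.name}: accuracy report"

class AugerProvider(AutoMLProvider):
    name = "auger"

class GoogleCloudProvider(AutoMLProvider):
    name = "google"

def run_pipeline(provider, path, target):
    """Invoke the shared stages in order, as a CLI front end would."""
    return [provider.import_data(path),
            provider.train(target),
            provider.evaluate()]

steps = run_pipeline(AugerProvider(), "sales.csv", "revenue")
```

Swapping `AugerProvider` for `GoogleCloudProvider` leaves the pipeline code untouched, which is the whole point of a common API across vendors whose individual APIs are similar but not identical.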

Krill concludes his write-up by supplying this link for interested readers to download A2ML from GitHub for themselves.

Cynthia Murrell, October 24, 2019

Bias: Female Digital Assistant Voices

October 17, 2019

It was a seemingly benign choice based on consumer research, but there is an unforeseen complication. TechRadar considers, “The Problem with Alexa: What’s the Solution to Sexist Voice Assistants?” From smart speakers to cell phones, voice assistants like Amazon’s Alexa, Microsoft’s Cortana, Google’s Assistant, and Apple’s Siri generally default to female voices (and usually sport female-sounding names) because studies show humans tend to respond best to female voices. Seems like an obvious choice—until you consider the long-term consequences. Reporter Olivia Tambini cites a report UNESCO issued earlier this year that suggests the practice sets us up to perpetuate sexist attitudes toward women, particularly subconscious biases. She writes:

“This progress [society has made toward more respect and agency for women] could potentially be undone by the proliferation of female voice assistants, according to UNESCO. Its report claims that the default use of female-sounding voice assistants sends a signal to users that women are ‘obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like “hey” or “OK”.’ It’s also worrying that these voice assistants have ‘no power of agency beyond what the commander asks of it’ and respond to queries ‘regardless of [the user’s] tone or hostility’. These may be desirable traits in an AI voice assistant, but what if the way we talk to Alexa and Siri ends up influencing the way we talk to women in our everyday lives? One of UNESCO’s main criticisms of companies like Amazon, Google, Apple and Microsoft is that the docile nature of our voice assistants has the unintended effect of reinforcing ‘commonly held gender biases that women are subservient and tolerant of poor treatment’. This subservience is particularly worrying when these female-sounding voice assistants give ‘deflecting, lackluster or apologetic responses to verbal sexual harassment’.”

So what is a voice-assistant maker to do? Certainly, male voices could be used and are, in fact, selectable options for several models. Another idea is to give users a wide variety of voices to choose from—not just different genders, but different accents and ages, as well. Perhaps the most effective solution would be to use a gender-neutral voice; one dubbed “Q” has now been created, proving it is possible. (You can listen to Q through the article or on YouTube.)

Of course, this and other problems might have been avoided had there been more diversity on the teams behind the voices. Tambini notes that just seven percent of information- and communication-tech patents across G20 countries are generated by women. As more women move into STEM fields, will unintended gender bias shrink as a natural result?

Cynthia Murrell, October 17, 2019

Machine Learning Tutorials

October 16, 2019

Want to know more about smart software? A useful list of instructional, reference, and learning materials appears in “40+ Modern Tutorials Covering All Aspects of Machine Learning.”

Materials range from free books about machine learning to lists of related material from SAP. DarkCyber noted that a short explanation of how to download documents posted on LinkedIn is included. (This says more about LinkedIn’s interesting approach to content than the list’s compiler perhaps intended.)

Quite a useful roundup.

Stephen E Arnold, October 16, 2019

Chatbot: Baloney Sliced and Served as Steak

October 15, 2019

DarkCyber noted “The Truth about Chatbots: Five Myths Debunked.” Silver bullets are keenly desired. Use smart software to eliminate most of the costs of customer support. (Anyone remember the last time customer support was painless, helpful, and a joy?)

IT Pro Portal seems to be aware that smart software dispensing customer service is in need of a bit of reality-marketing mustard. My goodness. Interesting. What’s next? Straight talk about quantum computing?

The write up identifies five “myths.” Viewing these from some sylvan viewshed, the disabused “myths” are:

  1. You will need multiple bots. Now multiple bots increase the costs of eliminating most humans from customer support and other roles. Yep, expensive.
  2. Humans won’t go away. That means sick days, protests, healthcare, and other peculiarly human costs are here to stay. Shocker! Smart software is not as smart as the pitch decks assert?
  3. Bots can do a lot. View this “myth” in the context of item 1.
  4. Bots require a support staff. Of course not. Buy a bot service and everything is just peachy.
  5. Bots don’t mean lock-in.

Now this dose of reality is a presentation of baloney and hand waving.

What is the truth about chatbots? Are they works in progress? Are they cost cutting mechanisms? Are they fairly narrow demonstrations of machine learning?

The reality is that bots, like customer service, are not yet as good as the marketers, PR professionals, and managers of firms selling bots assert.

Think about these five myths. It’s not one bot. It’s multiple bots. Bots can’t do human stuff as well as some humans. Bots do many things not so well. Rely on providers; you can trust vendors, right? Don’t worry about lock-in even though the goal of bot providers is to slap on those handcuffs.

To get a glimpse of unadulterated rah rah cheerleading, check out “Robots Are Catching Up to Humans in the Jobs Race.” That write up states:

In real terms, the price for an industrial robot has fallen by more than 60% in 20 years. They also get better as they get cheaper.

What’s not to like? Better, faster, cheaper.

Stephen E Arnold, October 15, 2019
