AI: The Facebook View for the Moment

February 21, 2019

We get some insight into the current trajectory of AI from Fortune’s article, “Facebook’s Chief A.I. Scientist Yann LeCun On the Future of Computer Chips, Lawnmowers, and Deep Learning.” The write-up points to a talk on AI hardware LeCun gave at the recent International Solid-State Circuits Conference in San Francisco.

Writer Jonathan Vanian highlights three points. First, he notes the advent of specialized chips designed to save energy, which should facilitate the use of more neural networks within data centers. This could mean faster speech translation, for example, or more effective image analysis. The tech could even improve content moderation, a subject much on Facebook’s mind right now. Then there are our “smart” devices, which can be expected to grow more clever as their chips get smaller. For instance, Vanian envisions a lawn mower that could identify and pull weeds. He notes, though, that battery capacity is another conundrum altogether.

Finally, we come to the curious issue of “common sense”—so far, AIs tend to fall far short of humans in that area. We’re told:

“Despite advances in deep learning, computers still lack common sense. They would need to review thousands of images of an elephant to independently identify them in other photos. In contrast, children quickly recognize elephants because they have a basic understanding about the animals. If challenged, they can extrapolate that an elephant is merely a different kind of animal—albeit a really big one. LeCun believes that new kinds of neural networks will eventually be developed that gain common sense by sifting through a smorgasbord of data. It would be akin to teaching the technology basic facts that it can later reference, like an encyclopedia. AI practitioners could then refine these neural networks by further training them to recognize and carry out more advanced tasks than modern versions.”

The chips to facilitate that leap are not yet on the market, of course. However, LeCun seems to believe they will soon be upon us. I do hope so; perhaps these super chips will bring some much needed sense to our online discourse.

Cynthia Murrell, February 21, 2019

The Balance: Smart Software and Humanoid

February 13, 2019

Micro-blogging platform Tumblr recently made headlines when it declared it would crack down on adult content. This is obviously no small task, so the company harnessed as much deep learning AI as it could. The problems started instantly: pictures of boots and jeans were being flagged as pornography. We got the full picture from a recent story at The Next Web, “The Challenges of Moderating Content with Deep Learning.”

According to the story:

“Obviously, the folks at Tumblr realize that there are distinct limits to the capabilities of deep learning, which is why they’re keeping humans in the loop. Now, the question is, why does a technology that is as good as—or even better than—humans at recognizing images and objects need the help to make a decision that any human could do without much effort?”

This is similar to a problem the defense and intelligence communities are struggling with. It is funny when boots get mislabeled as pornography, but what if your deep learning software mislabels satellite images of hostile threats? UCLA researchers recently reported that they have identified the limits of deep learning, and we are far closer to those limits than most folks guessed. That conclusion is good news for us humans, because living, breathing oversight really is still the only solution. Deep learning AI, in other words, is merely one tool among many we will continue to use.
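The “humans in the loop” arrangement mentioned in the quoted passage is typically implemented as a confidence-gated pipeline: the model acts on its own only when it is sure and routes everything else to a person. Below is a minimal sketch of that idea; the function, the labels, and the 0.9 threshold are illustrative assumptions, not Tumblr’s actual system.

```python
# Minimal human-in-the-loop moderation sketch. All names, labels, and the
# 0.9 threshold are illustrative assumptions, not Tumblr's actual pipeline.

def route_post(image, classifier, review_queue, confidence_threshold=0.9):
    """Act automatically only when the model is confident; otherwise ask a human."""
    label, confidence = classifier(image)  # e.g. ("adult", 0.62)

    if confidence >= confidence_threshold:
        # High-confidence decisions are applied without human review.
        return "flagged" if label == "adult" else "published"

    # Low-confidence cases (boots, jeans, other borderline images) go to people.
    review_queue.append(image)
    return "pending_human_review"

# Example usage with a stand-in classifier:
queue = []
fake_classifier = lambda img: ("adult", 0.62)
print(route_post("boots.jpg", fake_classifier, queue))  # -> "pending_human_review"
```

The design choice is the point: the machine handles the easy calls at scale, and the expensive human judgment is reserved for exactly the cases the article describes.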

Patrick Roland, February 13, 2019

IBM Debate Contest: Human Judges Are Unintelligent

February 12, 2019

I was a high school debater. I was a college debater. I did extemp. I did an event called readings. I won many cheesy medals and trophies. I also have a number of recollections about judges who shafted me and my teammate, or just hapless, young me.

I learned:

Human judges mean human biases.

When I learned that the audience voted a human the victor over the Jeopardy-winning, subject-matter-expert-sucking, recipe-writing IBM Watson, I knew the human penchant for distortion, prejudice, and foul play made an objective, scientific assessment impossible.


Humans may not be qualified to judge state-of-the-art artificial intelligence from sophisticated organizations like IBM.

The rundown and the video of the 25-minute travesty are on display via Engadget, with a non-argumentative explanation in words in the write-up “IBM AI Fails to Beat Human Debating Champion.” The real news report asserts:

The face-off was the latest event in IBM’s “grand challenge” series pitting humans against its intelligent machines. In 1996, its computer system beat chess grandmaster Garry Kasparov, though the Russian later accused the IBM team of cheating, something that the company denies to this day — he later retracted some of his allegations. Then, in 2011, its Watson supercomputer trounced two record-winning Jeopardy! contestants.

Yes, past victories.

Now what about the debate and human judges?

My thought is that the dust-up should have been judged by a panel of digital devastators; specifically:

  • Google DeepMind. DeepMind trashed a human Go player and understands the problems humanoids have being smart and proud
  • Amazon SageMaker. This is a system tuned with work for a certain three-letter agency and, therefore, has a DeepLens eye to spot the truth
  • Microsoft Brainwave (remember that?). This is a system which was the first hardware-accelerated model to make Clippy the most intelligent “bot” on the planet. Clippy, come back.

Here’s how this judging should have worked.

  1. Each system “learns” what it takes to win a debate, including voice tone, rapport with the judges and audience, and physical gestures (presence)
  2. Each system processes the video, audio, and sentiment expressed when the people in attendance clap, whistle, laugh, subvocalize “What a load of horse feathers,” etc.
  3. Each system generates a score with 0.000001 the low and 0.999999 the high
  4. The final tally would be calculated by Facebook FAIR (Facebook AI Research). The reason? Facebook is among the most trusted, socially responsible smart software companies.

The notion of a human judging a machine is what I call “deep stupid.” I am working on a short post about this important idea.

A human judged by humans is neither just nor impartial. Not Facebook FAIR.

An “also participated” award goes to IBM marketing.


IBM snagged an “also participated” medal. Well done.

Stephen E Arnold, February 13, 2019

Filtering for Fuzziness the YouTube Way

February 11, 2019

Software examines an item of content. The smart part of the software says, “This is a bad item.” Presumably the smart software has rules or has created rules to follow. So far, despite the artificial intelligence hyperbole, smart software is competent only in certain narrow applications. Figuring out whether an item created by a human, intentionally or unintentionally, contains information which another person finds objectionable is a tough job. Even humans struggle.

For example, a video interview (should one exist) in which Tim O’Reilly explains “The Fundamental Problem with Silicon Valley’s Favorite Strategy” could be considered offensive to some viewers and possibly to practitioners of “blitz growth.” When money is at stake, along with its sidekick power, Mr. O’Reilly could be viewed as crossing “the line.”

How would YouTube handle this type of questionable content? Would the video be unaffected? Would it be demoted for crossing “the line,” given that unfettered capitalism is the go-to business model for many companies, including YouTube’s owner? If flagged, what happens to the video?

The Hexus article “YouTube Video Promotion AI Change Is a ‘Historic Victory’” may provide some insight into my hypothetical example, which does not involve hate speech, controlled substances, trafficking, or other allegedly “easy to resolve” edge cases.

I noted this statement:

The key change being implemented by YouTube this year is in the way it “can reduce the spread of content that comes close to – but doesn’t quite cross the line of – violating our Community Guidelines”. Content that “could misinform users in harmful ways,” will find its influence reduced. Videos “promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11,” will be affected by the tweaked recommendation AI, we are told. YouTube is clear that it won’t be deleting these videos, as long as they comply with Community Guidelines. Furthermore, such borderline videos will still be featured for users that have the source channels in their subscriptions.

I think this means, “Link buried deep in the results list.” Fewer and fewer users of search systems dig into the subsequent pages of possibly relevant hits. That’s why search engine optimization people are in business. Relevance and objectivity are of zero importance. Appearing at the top of a results list, preferably as the first result, is the goal of some SEO experts. Appearing deep in a results list generates almost zero traffic.
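Mechanically, “demote, don’t delete” can be as simple as multiplying a borderline item’s recommendation score by a penalty so it sinks in the ranked list. The sketch below is purely illustrative; the penalty value and field names are my assumptions, not YouTube’s system.

```python
# Purely illustrative sketch of "demote, don't delete": a borderline flag
# multiplies the recommendation score down so the item sinks in the list.

BORDERLINE_PENALTY = 0.1  # assumed: shrink the score by 90%

def rank_videos(candidates):
    """candidates: list of dicts like {"id": ..., "relevance": 0.87, "borderline": True}."""
    def score(video):
        penalty = BORDERLINE_PENALTY if video["borderline"] else 1.0
        return video["relevance"] * penalty
    return sorted(candidates, key=score, reverse=True)

videos = [
    {"id": "flat-earth-expose", "relevance": 0.92, "borderline": True},
    {"id": "cooking-tutorial", "relevance": 0.55, "borderline": False},
]
print([v["id"] for v in rank_videos(videos)])  # the borderline item drops below the other
```

Nothing is removed; the flagged video simply lands where, as noted above, almost nobody clicks.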

The Hexus write up continued:

At the weekend former Google engineer Guillaume Chaslot admitted that he helped to build the current AI used to promote recommended videos on YouTube. In a thread of Tweets, Chaslot described the impending changes as a “historic victory”. His opinion comes from seeing and hearing of people falling down the “rabbit hole of YouTube conspiracy theories, with flat earth, aliens & co”.

So what? The write up points out:

Unfortunately there is an asymmetry with BS.

When monopolies decide, what happens?

Certainly this is a question which warrants some effort on the part of graduate students to answer. The companies involved may not be the optimal source of accurate information.

Stephen E Arnold, February 11, 2019

Hate Speech Detection

February 1, 2019

Hate speech runs rampant on the Internet, especially on social media and Web sites. Trying to contain hate speech is like trying to drain the ocean with a garden hose. Several tech companies are trying to rein in hate speech, but their attempts stink. Digital Trends focuses on how hate speech AI technology is in the pits in the article, “Current Tech For Detecting Hate Speech Is Woefully Inadequate, Researchers Find.”

The problem is not the tech companies but their technology. Researchers from Aalto University in Finland analyzed hate detection tools. They discovered that none of the tools could agree on what qualifies as hate speech and that they are stupid. The hate detection tools were easily fooled with typos and letter substitutions. Humans are still needed to interpret the true meaning of words and their context. For example:

“The researchers next demonstrated how all seven systems could be easily fooled by simple automatic text transformation attacks — such as making small changes to words, introducing or removing spaces, or adding unrelated words. For example, adding the word “love” into an otherwise hate-filled message confuses detection systems. These tricks were capable of fooling both straightforward keyword filters and more complex A.I. systems, based on deep-learning neural network architectures.”
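To make the attack concrete, here is a minimal sketch, in Python, of how a naive keyword filter behaves under the transformations described in the quote. The blocklist, messages, and transformations are my own stand-ins, not the researchers’ code or the seven systems they tested.

```python
# Sketch of "simple automatic text transformation attacks" against a keyword filter.
# The blocklist terms are placeholders; real systems use larger lexicons or neural models.

BLOCKLIST = {"hateword1", "hateword2"}  # stand-ins for a real filter's terms

def keyword_filter(text: str) -> bool:
    """Return True if the message is flagged."""
    return any(word in BLOCKLIST for word in text.lower().split())

message = "hateword1 you and everyone like you"
print(keyword_filter(message))                                            # True: caught

# Attack 1: remove a space so the token no longer matches the blocklist
print(keyword_filter(message.replace("hateword1 you", "hateword1you")))   # False

# Attack 2: introduce a typo / character substitution
print(keyword_filter(message.replace("hateword1", "h4teword1")))          # False

# Attack 3: pad the message with innocuous words such as "love"; a bare keyword
# filter still fires, but the researchers report this dilution confuses
# score-based deep learning systems.
print(keyword_filter("love love " + message))                             # True
```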

Computers still cannot compete with humans when it comes to understanding and interpreting human emotions. Hate speech detection will improve as more developers research and experiment with the AI.

Whitney Grace, February 1, 2019

Fact Checkers Are Key to Next Gen Data

January 30, 2019

Misinformation has been a plague on the internet, and also on those working in law enforcement and intelligence. It has never been easy to trust a tip or a hunch, but now, with the confusion of data, it is growing nearly impossible. Facebook’s Sheryl Sandberg recently discussed how her company, arguably the most plague-ridden, is attacking this issue. We found out more from the TechCrunch article, “Stung by Criticism, Facebook’s Sandberg Outlines New Plans to Tackle Misinformation.”

According to the story:

“She said Facebook was now working with fact checkers around the world and had tweaked its algorithm to show related articles allowing users to see both sides of a news story that is posted on the platform. It was also taking down posts which had the potential to create real-world violence.”

The most interesting part of that quote involves fact checkers. A term that remains a holdover from the days when journalism was king, it reminds us that the role of humans is not outdated. Many are calling for the integration of human fact checkers amidst the ranks of AI and deep learning. This feels like the sensible direction, whether you work for Twitter or the CIA, because facts matter. Period.

Patrick Roland, January 30, 2019

Smart Software: Maybe Not What It Seems

January 27, 2019

Fast computers, memory, and bandwidth can make stupid software look smart. That’s one takeaway from Big Think’s AI debunker “Why A.I. Is a Big Fat Lie.” Marketers at the likes of IBM, Palantir Technologies, and similar companies are likely to take an opposing view. These firms’ software is magical, reduces the time required to make sense of information, and delivers the “fix” to the “find, fix, and finish” crowd.

Among the weak spots in the AI defenders’ suit of armor are:

  • AI as a buzzword is “BS”. I assume this acronym does not mean Beyond Search
  • Machine learning is one thing, but it is not autonomous. Humans are needed
  • AI won’t terminate me.

The article tackles talking computers and fancy concepts like neural nets.

I learned:

There’s literally no meaningful definition whatsoever. AI poses as a field, but it’s actually just a fanciful brand. As a supposed field, AI has many competing definitions, most of which just boil down to “smart computer.” I must warn you, do not look up “self-referential” in the dictionary. You’ll get stuck in an infinite loop.

The problem is that venture capitalists desperately want a next big thing, lots of money, and opportunities to give talks at Davos. Therefore, smart software is, by golly, going down the bullet train’s rails.

The entrepreneurs who often believe that their innovation has cracked the AI problem have to tell the world. Enter marketers, PR people, biz dev types with actual suits or sport jackets. These folks cheer for the smart software team.

Finally, there are the overwhelmed, confused, and panicked software procurement teams who have to find a way to cut costs and improve efficiency, yada yada yada. The objective is to acquire something new, study it, realign, and repeat the process. Ah, complex smart software. A thing of beauty, right?

Take a look at this Big Think article. Interesting stuff.

Stephen E Arnold, January 27, 2019

That Good Old AI Transition

January 24, 2019

At a recent industry event, The Drum Future of Marketing, IBM’s Jeremy Waite discussed the use of AI services in business. “Digital Transformation Takes Around Four Years and 85% of Them Fail,” The Drum reports. IBM should know about transformation. And failure. Just ask Watson.

Writer Danielle Gibson does note that Waite acknowledges IBM has been bad at communicating what its AI can (and cannot) do. He also shared some insights into the process of transitioning into a company that embraces AI tech. Gibson writes:

“But before we realize this [AI revolution], AI and how it is used in businesses itself needs to mature, said Waite. He reminded that only about 3% of the industry is using AI. Looking ahead, within 18 months to three years, Waite expects this figure to rise a whopping 28%. ‘Particularly when you look at healthcare. It takes such a long time to mature and a lot of it is trying to educate the marketplace on what it is and isn’t,’ he said. There is a competitive element stopping players from talking about some of the incredible projects being driven by AI. Waite added: ‘Any digital transformation project is going to take around four years, and 85% of them fail. That’s the biggest challenge we have, trying to educate people about what it is that most people in the industry don’t want to share.’”

The article touches on what does and does not qualify as AI, and assures us the technology is expected to create more jobs than it eliminates by 2020. We shall see.

Cynthia Murrell, January 24, 2019

Google: Trolls and Love

January 24, 2019

Internet trolls are as old as the Internet. They are annoying, idiotic, and sad individuals, and people are getting tired of them. While it is best to ignore them, some trolls take things to the next level, so they need to be dealt with seriously. Google, Twitter, Facebook, and other technology companies are implementing AI to detect toxic comments and hate speech. Unfortunately, these AIs are simple to undermine. The Next Web shares that, “Google’s AI To Detect Toxic Comments Can Be Easily Fooled With ‘Love.’”

According to the article, Google’s Perspective AI is easily fooled with typos, extra spaces between words, and innocuous words added to sentences. Google is trying to make the Internet a nicer place:

“The AI project, which was started in 2016 by a Google offshoot called Jigsaw, assigns a toxicity score to a piece of text. Google defines a toxic comment as a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion. The researchers suggest that even a slight change in the sentence can change the toxicity score dramatically. They saw that changing “You are great” to “You are [obscenity] great”, made the score jump from a totally safe 0.03 to a fairly toxic 0.82.”

The AI is using words with negative meanings to create a toxicity score. The AI’s design is probably very simple, with negative words assigned a 1 and positive words a 0. Human speech and emotion are more complicated than what an AI can detect, so sentiment analytics are needed. The only problem is that sentiment analytics are just as easily fooled as Google’s Jigsaw. How can Google improve this? Time, money, and more trial and error.
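If the scoring really were that naive (a big assumption for illustration only; Perspective is actually a neural model), a short sketch shows why padding a message with benign words like “love” drags the score down. The lexicon and weights below are made up.

```python
# Toy bag-of-words toxicity scorer, assumed purely for illustration.
# It demonstrates the dilution effect, not how Perspective actually works.

TOXIC_WEIGHTS = {"idiot": 1.0, "stupid": 1.0, "love": 0.0, "great": 0.0}

def toxicity_score(text: str) -> float:
    """Average per-word weights; unknown words count as neutral (0.0)."""
    words = text.lower().split()
    return sum(TOXIC_WEIGHTS.get(w, 0.0) for w in words) / max(len(words), 1)

print(toxicity_score("you are a stupid idiot"))                      # 0.40
print(toxicity_score("you are a stupid idiot love love love love"))  # ~0.22, diluted
```

The obscenity example from the quote works in reverse: one heavily weighted word dropped into an otherwise benign sentence yanks the average up.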

Whitney Grace, January 24, 2019

Smart Software and Academics

January 19, 2019

In a world overcome with scientific studies that make life-changing claims, only to be debunked, maybe it is time for AI to throw a life preserver. Nowhere is this more obvious than in higher education, where there is virtually no oversight of academic studies, as we discovered in a recent Chronicle of Higher Education story, “Sokal Squared: Is Huge Publishing Hoax.”

According to the story, a group of academics:

…spent 10 months writing 20 hoax papers that illustrate and parody what they call “grievance studies,” and submitted them to “the best journals in the relevant fields.” Of the 20, seven papers were accepted, four were published online, and three were in process when the authors “had to take the project public prematurely and thus stop the study, before it could be properly concluded.”

While some think this prank exposes the fallibility of academia, others claim it is destructive to the trust placed in those outlets. Either way, this seems like an optimal time for big data and AI to step in to help verify things. Schools already use similar software to check whether students are plagiarizing papers.

With non-reproducible results and bogus research finding their way into august professional journals, smart software has its work cut out for its zeros and ones.

Patrick Roland, January 19, 2019
