Where Can AI Get Its Smarts? How about Prison?

April 15, 2019

The DarkCyber team did not fabricate the story “Inmates Are Training AI As Part of Prison Labor Tasks.” I noted this passage:

An unusual type of prison labor has been introduced for inmates in Finland’s jails: training artificial intelligence. This is based around categorizing data which is used to train artificial intelligence algorithms for a startup company.

Interesting.

Stephen E Arnold, April 15, 2019

Facial Recognition: An Important Technology Enters Choppy Waters

April 8, 2019

I wouldn’t hold my breath: The Electronic Frontier Foundation (EFF) declares, “Governments Must Face the Facts About Face Surveillance, and Stop Using It.” Writers Hayley Tsukayama and Adam Schwartz begin by acknowledging reality—the face surveillance technology business is booming, with the nation’s law enforcement agencies increasingly adopting it. They write:

EFF supports legislative efforts in Washington and Massachusetts to place a moratorium on government use of face surveillance technology. These bills also would ban a particularly pernicious kind of face surveillance: applying it to footage taken from police body-worn cameras. The moratoriums would stay in place, unless lawmakers determined these technologies do not have a racial disparate impact, after hearing directly from minority communities about the unfair impact face surveillance has on vulnerable people. We recently sent a letter to Washington legislators in support of that state’s moratorium bill.

EFF’s communications may be having some impact.

DarkCyber noted that Amazon will be allowing shareholders a vote about sales of the online bookstore’s facial recognition technology, Rekognition. “AI Researchers Tell Amazon to Stop Selling Facial Recognition to the Police” does not explain how Amazon can remove its facial recognition technology from those entities which have licensed it.

DarkCyber believes that the US is poised to become a procurement innovation center. Companies and their potential customers have to figure out how to work together without creating political, legal, and financial disruptions.

A failure to resolve what seems to be a more common problem may allow vendors in other countries to capture leading engineers, major contracts, and a lead in an important technology.

Stephen E Arnold, April 8, 2019

Intelligence Community Braces for AI-Generated Fake People

April 4, 2019

AI trouble comes in all shapes and sizes. Chatbots have been around so long that they seem to have become a generic term. More recently, news has surfaced that fake videos can be produced of famous people saying, well, anything. But a new, more subtle threat looms: making realistic photos of non-existent people. We learned more about this odd threat in the Mashable article, “This Website Uses AI to Generate Faces of People Who Don’t Really Exist.”

According to the story:

“As for the societal impact of this technology, Wang said the more people are aware of it, the better they can be prepared for these images…A powerful enough GAN [the face making technology] could be used to create an image of a loved one, which could be used for manipulation, he said. Or a big enough dataset could be used to create all sorts of realistic images, from scratch.”

We assumed that Google’s external AI board might have provided some insight, ideas, and inspiration with regard to digital manipulations. It seems, however, that the AI board has become mired in in-fighting. Real, not fake.

That’s real, which throws a bit of water on the idea of figuring out what’s false.

Patrick Roland, April 4, 2019

Artificial Intelligence Overhyped. Believe It or Not

April 1, 2019

Again, not an April Fool’s spoof. “Concerns Over AI’s Ability to Create Fake News Are a Little Overhyped, Says Salesforce Chief Scientist.” The write up includes this statement:

“I think it’s a little overhyped,” Richard Socher, chief scientist at software firm Salesforce, told CNBC in an interview during his trip to Singapore. That’s because humans are already adept at creating fake news without the help of algorithms, Socher said. Furthermore, fake news usually has some sort of “agenda” behind it — something that AI inherently lacks, he added.

DarkCyber likes the “little overhyped” phrase. Why would a technology company engage in overstatement? Oh, money. Right.

Stephen E Arnold, April 1, 2019

Smart Software Has a Possible Blind Spot

March 29, 2019

Following the recent attacks in two New Zealand mosques, during which a suspected terrorist successfully live-streamed horrific video of their onslaught for over a quarter-hour, many are asking why the AI tasked with keeping such content off social media failed us. As it turns out, context is key. CNN explains “Why AI Is Still Terrible at Spotting Violence Online.” Reporter Rachel Metz writes:

“A big reason is that whether it’s hateful written posts, pornography, or violent images or videos, artificial intelligence still isn’t great at spotting objectionable content online. That’s largely because, while humans are great at figuring out the context surrounding a status update or YouTube, context is a tricky thing for AI to grasp.”

Sites currently try to account for that shortfall with a combination of AI and human moderators, but they have trouble keeping up with the enormous influx of postings. For example, we’re told YouTube users alone upload more than 400 hours of video per minute. Without enough people to provide context, AI is simply at a loss. Metz notes:

“AI is not good at understanding things such as who’s writing or uploading an image, or what might be important in the surrounding social or cultural environment. … Comments may superficially sound very violent but actually be satire in protest of violence. Or they may sound benign but be identifiable as dangerous to someone with knowledge about recent news or the local culture in which they were created.

I also circled this statement:

“… Even if violence appears to be shown in a video, it isn’t always so straightforward that a human — let alone a trained machine — can spot it or decide what best to do with it. A weapon might not be visible in a video or photo, or what appears to be violence could actually be a simulation.”

On top of that, factors that may not be apparent to human viewers, like lighting, background images, or even frames per second, complicate matters for AI. It appears it will be some time before we can rely on algorithms to shield social media from abhorrent content. Can platforms come up with some effective alternative in the meantime? Sure, as long as the performance is in the 50 to 75 percent accuracy range.

Cynthia Murrell, March 29, 2019

Silicon Valley: The New Center of Ethical Thought

March 28, 2019

I read “Ethical Question Takes Center Stage at Silicon Valley Summit on Artificial Intelligence.” The write up is a collection of statements made by people attending the conference. A couple of the statements were fascinating; for instance, here’s one allegedly offered by a Google senior vice president:

Google’s Walker [a senior VP of global affairs] said the company has some 300 people working to address issues such as racial bias in algorithms but the company has a long way to go.

I wonder if each attendee received a copy of The Age of Surveillance Capitalism? TLDNR probably.

Stephen E Arnold, March 28, 2019

Google: Getting AI Advice from Humans, Not AI

March 27, 2019

If your blood pressure is high when thinking about machine learning, you are not alone. If you believe the headlines, we are all (no matter the industry) on the cusp of being replaced by AI and machine learning. However, there is hope for meager humans like us, as we discovered in a recent Forbes article, “A Reminder That Machine Learning Is About Correlations Not Causation.”

According to the story:

“Developers and data scientists increasingly treat their creations as silicon life forms “learning” concrete facts about the world, rather than what they truly are: piles of numbers detached from what they represent, mere statistical patterns encoded into software. We must recognize that those patterns are merely correlations amongst vast reams of data, rather than causative truths or natural laws governing our world.”
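The quoted distinction can be made concrete with a toy sketch. The numbers and the ice-cream/drowning scenario below are hypothetical illustrations, not from the article: two quantities driven by a shared hidden factor correlate strongly, so a fitted model happily “predicts” one from the other, even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden confounder: e.g., summer heat drives both ice-cream
# sales (x) and drowning incidents (y). Neither causes the other.
heat = rng.normal(size=5000)
ice_cream = heat + rng.normal(scale=0.5, size=5000)
drownings = heat + rng.normal(scale=0.5, size=5000)

# A least-squares "model" predicts drownings from ice cream quite well.
slope, intercept = np.polyfit(ice_cream, drownings, 1)
r = np.corrcoef(ice_cream, drownings)[0, 1]
print(f"correlation r = {r:.2f}, fitted slope = {slope:.2f}")

# Intervening on ice cream (banning sales) would change drownings not
# at all: the fitted pattern is a correlation through heat, not a cause.
```

The fit looks impressive as a statistical pattern, which is exactly the author’s point: the model encodes correlations, not causative truths.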

Worry not, please.

Google has launched a global artificial intelligence council. The council will advise AI companies about artificial intelligence and ethics, according to Reuters. We noted:

The council, which is slated to publish a report at the end of 2019, includes technology experts, digital ethicists, and people with public policy backgrounds, Kent Walker, Google’s senior vice president for global affairs, said at a Massachusetts Institute of Technology conference.

Will Google remember or selectively forget to listen to the inputs from the council? Yes, the council includes a drone expert. No, the council does not include a screenwriter who worked on Terminator.

Stephen E Arnold, March 27, 2019

Smart or Not So Smart Software?

March 22, 2019

I read “A Further Update on New Zealand Terrorist Attack.” The good news is that the Facebook article did not include the word “sorry” or the phrase “we’ll do better.” The bad news is that the article includes this statement:

AI systems are based on “training data”, which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video. This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems. However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.
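Facebook’s point about needing “large volumes of data of this specific kind of content” can be illustrated with a minimal, hypothetical sketch (not Facebook’s actual system): when the harmful class is vanishingly rare, a detector can post near-perfect accuracy while catching none of the content that matters.

```python
import numpy as np

# Toy content stream: 10,000 uploads, of which only 5 are the rare
# harmful kind (label 1). These counts are made up for illustration.
n, n_rare = 10_000, 5
labels = np.zeros(n, dtype=int)
labels[:n_rare] = 1

# A degenerate "detector" that always predicts the majority class
# still scores 99.95% accuracy -- while flagging nothing.
preds = np.zeros(n, dtype=int)
accuracy = (preds == labels).mean()
recall = preds[labels == 1].mean()
print(f"accuracy={accuracy:.2%}, recall on rare class={recall:.0%}")
```

Accuracy alone rewards ignoring the rare class, which is why systems trained on few positive examples tend to miss exactly the events they were built to catch.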

Violent videos have never before been posted to Facebook? Hmmm.

Smart software, smart employees, smart PR. Sort of. The fix is to process more violent videos. Sounds smart.

Stephen E Arnold, March 22, 2019

Smart Software: Confused or Overhyped?

March 21, 2019

I read “Lost in Translation: Osaka Subway Removes Website after Automated Program Produces Garbled Phrases.” The main point of the article is that smart software generated nonsense phrases. Here’s one example of an ad for a new TV program:

“Osaka Metro TV uploaded footage of city is the new born. Please visit the can’t usually see pretty CM filming behind the scenes!”

I have a different view. When I was in Japan, I noted slogans on pullovers and T-shirts like this one:

[image: T-shirt slogan]

My thought is that the wonky translation system was trained on T-shirt slogans crafted in Japan.

Stephen E Arnold, March 21, 2019

Intelligence Professionals: Data Caution Required?

March 21, 2019

Perhaps we rely too much on AI and machine learning in our respective industries. This is a lesson the intelligence world of the NSA, CIA, and the like is beginning to understand. While the insights computer programs provide are illuminating, they are subject to inaccuracies, too. We learned more from a recent Sapiens story, “The Science of Human Nature Has a Serious Problem.”

According to the story:

“But a growing body of research has raised concerns that many of these discoveries suffer from severe biases of their own. Specifically, the vast majority of what we know about human psychology and behavior comes from studies conducted with a narrow slice of humanity—college students, middle-class respondents living near universities, and highly educated residents of wealthy, industrialized, and democratic nations.”
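The sampling-bias concern in the quote can be sketched with made-up numbers: when a study (or a training set) draws only from a convenient subgroup, its estimate can land far from the population value it claims to describe.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical population: two groups whose behavior scores differ.
group_a = rng.normal(loc=0.0, size=9000)   # rarely studied majority
group_b = rng.normal(loc=1.0, size=1000)   # the convenient "college
                                           # students near universities"
population = np.concatenate([group_a, group_b])

# A study that samples only the convenient group overshoots badly.
biased_estimate = group_b.mean()
true_value = population.mean()
print(f"biased estimate {biased_estimate:.2f} "
      f"vs population value {true_value:.2f}")
```

A model trained or validated on the narrow slice inherits the same distortion, which is the caution the article directs at intelligence analysts.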

The intelligence community and other investigative groups rely on smart software. The outputs from some systems may generate signals which can be off the mark. How will vendors respond?

Marketing may have to ride to the rescue.

Patrick Roland, March 21, 2019
