Reflecting on New Zealand

June 5, 2019

Following the recent attacks in two New Zealand mosques, during which a suspected terrorist successfully live-streamed horrific video of their onslaught for over a quarter-hour, many are asking why the AI tasked with keeping such content off social media failed us. As it turns out, context is key. CNN explains “Why AI Is Still Terrible at Spotting Violence Online.” Reporter Rachel Metz writes:

“A big reason is that whether it’s hateful written posts, pornography, or violent images or videos, artificial intelligence still isn’t great at spotting objectionable content online. That’s largely because, while humans are great at figuring out the context surrounding a status update or YouTube video, context is a tricky thing for AI to grasp.”

Sites currently try to account for that shortfall with a combination of AI and human moderators (a rough sketch of how such a triage pipeline might work appears at the end of this post), but they have trouble keeping up with the enormous influx of postings. For example, we’re told YouTube users alone upload more than 400 hours of video per minute. Without enough people to provide context, AI is simply at a loss. Metz notes:

“AI is not good at understanding things such as who’s writing or uploading an image, or what might be important in the surrounding social or cultural environment. … Comments may superficially sound very violent but actually be satire in protest of violence. Or they may sound benign but be identifiable as dangerous to someone with knowledge about recent news or the local culture in which they were created.”

We also noted:

“… Even if violence appears to be shown in a video, it isn’t always so straightforward that a human — let alone a trained machine — can spot it or decide what best to do with it. A weapon might not be visible in a video or photo, or what appears to be violence could actually be a simulation.”

On top of that, factors that may not be apparent to human viewers, like lighting, background images, or even frames per second, complicate matters for AI. It appears it will be some time before we can rely on algorithms to shield social media from abhorrent content. Can platforms come up with some effective alternative in the meantime? The pressure is on.
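For the curious, here is a minimal, purely illustrative Python sketch of the hybrid approach mentioned above: an automated classifier handles the clear-cut cases, while the ambiguous middle band, where context matters most, is routed to human reviewers. The Post class, the violence_score field, and the threshold values are all assumptions made for illustration; they do not reflect any platform’s actual system.

# Hypothetical sketch of a hybrid AI-plus-human moderation triage.
# The model scores, thresholds, and queues are illustrative assumptions,
# not any platform's real implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    violence_score: float  # assumed classifier confidence in [0.0, 1.0]

@dataclass
class ModerationQueues:
    auto_removed: List[Post] = field(default_factory=list)
    human_review: List[Post] = field(default_factory=list)
    published: List[Post] = field(default_factory=list)

def triage(post: Post, queues: ModerationQueues,
           remove_above: float = 0.95, review_above: float = 0.60) -> None:
    """Route a post based on the classifier's confidence score.

    High-confidence violations are removed automatically; the ambiguous
    middle band, where context matters most, goes to human reviewers.
    """
    if post.violence_score >= remove_above:
        queues.auto_removed.append(post)      # clear-cut violation
    elif post.violence_score >= review_above:
        queues.human_review.append(post)      # humans supply missing context
    else:
        queues.published.append(post)         # likely benign

if __name__ == "__main__":
    queues = ModerationQueues()
    for p in [Post("a", 0.99), Post("b", 0.72), Post("c", 0.10)]:
        triage(p, queues)
    print(len(queues.auto_removed), len(queues.human_review),
          len(queues.published))  # -> 1 1 1

Even in this toy version the trade-off is plain: lower the review threshold and the human queues balloon under the volume Metz describes; raise it and contextual errors, like satire flagged as violence, slip through unexamined.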

Cynthia Murrell, June 5, 2019
