Smart or Not So Smart Software?

March 22, 2019

I read “A Further Update on New Zealand Terrorist Attack.” The good news is that the Facebook article did not include the word “sorry” or the phrase “we’ll do better.” The bad news is that the article includes this statement:

AI systems are based on “training data”, which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video. This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems. However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.
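Facebook's explanation is, at bottom, a class imbalance problem: a detector trained on many examples of one kind of content will not flag content it has never seen. A toy sketch of that failure mode (purely illustrative, not Facebook's actual system) in a few lines of Python:

```python
from collections import Counter

def train(examples):
    """Build a bag-of-words profile from labeled positive examples."""
    profile = Counter()
    for text in examples:
        profile.update(text.lower().split())
    return profile

def score(profile, text):
    """Fraction of the text's words seen during training."""
    words = text.lower().split()
    return sum(w in profile for w in words) / len(words)

# Plenty of training data for a familiar category...
propaganda_profile = train([
    "join our cause fight the enemy",
    "the enemy must be destroyed join us",
])

# ...so similar content scores high:
print(score(propaganda_profile, "fight the enemy join our cause"))  # 1.0

# ...but a novel kind of content, absent from the training data, scores low
# and slips past the detector:
print(score(propaganda_profile, "live stream from helmet camera"))  # 0.0
```

The point the quote concedes: no matter how sophisticated the real models are, "we need more examples of this specific kind of content" means the system learns only what it has already been shown.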

Violent videos have never before been posted to Facebook? Hmmm.

Smart software, smart employees, smart PR. Sort of. The fix is to process more violent videos. Sounds smart.

Stephen E Arnold, March 22, 2019
