Can Machine Learning Pick Out The Bullies?

November 13, 2019

In Walt Disney’s 1942 classic Bambi, Thumper the rabbit was told, “If you can’t say something nice, don’t say nothing at all.”

Poor grammar aside, the thumping rabbit did deliver wise advice to the audience. Then came the Internet and anonymity, and the trolls were released upon the world. Internet bullying is one of the world’s top cyber crimes, along with identity and money theft. Passionate anti-bullying campaigners, particularly individuals who were themselves cyber-bullying victims, want social media Web sites to police their users and prevent the abuse. Trying to police the Internet, however, is like herding cats: it might be possible with the right type of fish, but cats are not herd animals, and they scatter once the tasty fish is gone.

Technology might have advanced enough to detect bullying, and AI could be the answer. Innovation Toronto wrote, “Machine Learning Algorithms Can Successfully Identify Bullies And Aggressors On Twitter With 90 Percent Accuracy.” AI’s biggest problem is that, while algorithms can identify and harvest information, they lack the ability to understand emotion and context. Many bullying actions on the Internet are sarcastic or hidden within metaphors.

Computer scientist Jeremy Blackburn and his team from Binghamton University analyzed bullying behavior patterns on Twitter. They discovered useful information to understand the trolls:

“‘We built crawlers — programs that collect data from Twitter via variety of mechanisms,’ said Blackburn. ‘We gathered tweets of Twitter users, their profiles, as well as (social) network-related things, like who they follow and who follows them.’”

The researchers then performed natural language processing and sentiment analysis on the tweets themselves, as well as a variety of social network analyses on the connections between users. The researchers developed algorithms to automatically classify two specific types of offensive online behavior: cyber bullying and cyber aggression. The algorithms were able to identify abusive users on Twitter with 90 percent accuracy. These are users who engage in harassing behavior, e.g., those who send death threats or make racist remarks to other users.

“‘In a nutshell, the algorithms ‘learn’ how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples,’ said Blackburn.”
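The article stops short of publishing code, but the approach Blackburn describes (turn tweet text and network signals into features, then let a classifier learn which features matter from labeled examples) can be sketched in a few lines. The following is a minimal, hypothetical illustration with invented data and feature choices, not the Binghamton team’s actual system:

```python
# Minimal sketch (hypothetical, not the Binghamton code): tweet text and simple
# network counts become features; a classifier learns weights for them from
# labeled examples of abusive vs. typical users.
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["you are worthless, just quit",      # toy "abusive" example
          "great game last night, friends"]    # toy "typical" example
network = [[10, 500], [300, 280]]              # invented follower/following counts
labels = [1, 0]                                # 1 = abusive user, 0 = typical user

text_features = TfidfVectorizer().fit_transform(tweets)   # word-level weights
features = hstack([text_features, csr_matrix(network)])   # text plus network features

model = LogisticRegression().fit(features, labels)        # weighs features from examples
print(model.predict(features))                            # should recover the toy labels
```

A real system would, of course, need thousands of labeled users and far richer network features, but the weighting-by-example mechanic is the same.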

Blackburn and his team’s algorithm only detects aggressive behavior; it does nothing to prevent cyber bullying. The victims still see, and are harmed by, the comments from bullying users, but the system does give Twitter a heads up for removing the trolls.

The anti-bullying algorithm catches bullying only after there are victims. It does little to assist those victims, but it does prevent future attacks. What steps need to be taken to prevent bullying altogether? Maybe schools need to teach classes on Internet etiquette alongside the Common Core. Then again, if it is not on the test, it will not be in a classroom.

Whitney Grace, November 13, 2019

False News: Are Smart Bots the Answer?

November 7, 2019

To us, this comes as no surprise—Axios reports, “Machine Learning Can’t Flag False News, New Studies Show.” Writer Joe Uchill concisely summarizes some recent studies out of MIT that should quell any hope that machine learning will save us from fake news, at least any time soon. Though we have seen that AI can be great at generating readable articles from a few bits of info, mimicking human writers, and even detecting AI-generated stories, that does not mean it can tell the true from the false. These studies were performed by MIT doctoral student Tal Schuster and his team of researchers. Uchill writes:

“Many automated fact-checking systems are trained using a database of true statements called Fact Extraction and Verification (FEVER). In one study, Schuster and team showed that machine learning-taught fact-checking systems struggled to handle negative statements (‘Greg never said his car wasn’t blue’) even when they would know the positive statement was true (‘Greg says his car is blue’). The problem, say the researchers, is that the database is filled with human bias. The people who created FEVER tended to write their false entries as negative statements and their true statements as positive statements — so the computers learned to rate sentences with negative statements as false. That means the systems were solving a much easier problem than detecting fake news. ‘If you create for yourself an easy target, you can win at that target,’ said MIT professor Regina Barzilay. ‘But it still doesn’t bring you any closer to separating fake news from real news.’”
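The shortcut the researchers describe is easy to reproduce in miniature. In the toy sketch below (invented sentences, not the FEVER data or the MIT team’s code), a bag-of-words model trained on examples where negation co-occurs with the “false” label ends up treating the negation word itself as the signal:

```python
# Toy illustration of dataset bias (hypothetical data, not FEVER): the training
# examples pair negation with the "false" label, so the model learns the word
# "never" as a shortcut rather than anything about factuality.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train = ["greg says his car is blue",          # labeled true
         "the shop opens at nine",             # labeled true
         "greg never said his car was blue",   # labeled false
         "the shop never opens at nine"]       # labeled false
labels = ["true", "true", "false", "false"]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(train), labels)

# A sentence that merely contains "never" leans toward "false" for this model:
print(model.predict(vectorizer.transform(["greg never lies about his car"])))
```

That is the “easy target” Barzilay warns about: the classifier can win at the benchmark while learning nothing about whether a claim is actually true.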

Indeed. Another of Schuster’s studies demonstrates that algorithms can usually detect text written by their kin. We’re reminded, however, that the fact an article is machine written does not in itself mean it is false. In fact, he notes, text bots are now being used to adapt legitimate stories to different audiences or to generate articles from statistics. It looks like we will just have to keep verifying articles with multiple trusted sources before we believe them. Imagine that.

Cynthia Murrell, November 7, 2019

TikTok: True Colors?

October 22, 2019

Since it emerged from China in 2017, the video sharing app TikTok has become very popular. In fact, it became the most downloaded app in October of the following year, after merging with Musical.ly. That deal opened up the U.S. market, in particular, to TikTok. Americans have since been having a blast with the short-form video app, whose stated mission is to “inspire creativity and joy.” The Verge, however, reminds us where this software came from—and how its owners behave—in the article, “It Turns Out There Really Is an American Social Network Censoring Political Speech.”

Reporter Casey Newton grants that US-based social networks impose their own limits, removing hate speech, violence, and sexual content from their platforms. However, that is a far cry from the types of censorship that are common in China. Newton points to a piece by Alex Hern in The Guardian that details how TikTok has directed its moderators to censor content about Tiananmen Square, Tibetan independence, and the Falun Gong religious group. It is worth mentioning that TikTok’s producer, ByteDance, maintains a separate version of the app (Douyin) for use within China’s borders. The suppression documented in the Guardian story, then, is aimed specifically at the rest of us. Newton writes:

“As Hern notes, suspicions about TikTok’s censorship are on the rise. Earlier this month, as protests raged, the Washington Post reported that a search for #hongkong turned up ‘playful selfies, food photos and singalongs, with barely a hint of unrest in sight.’ In August, an Australian think tank called for regulators to look into the app amid evidence it was quashing videos about Hong Kong protests. On the one hand, it’s no surprise that TikTok is censoring political speech. Censorship is a mandate for any Chinese internet company, and ByteDance has had multiple run-ins with the Communist party already. In one case, Chinese regulators ordered its news app Toutiao to shut down for 24 hours after discovering unspecified ‘inappropriate content.’ In another case, they forced ByteDance to shutter a social app called Neihan Duanzi, which let people share jokes and videos. In the aftermath, the company’s founder apologized profusely — and pledged to hire 4,000 new censors, bringing the total to 10,000.”

For its part, TikTok insists the Guardian-revealed guidelines have been replaced with more “localized approaches,” and that they now consult outside industry leaders in creating new policies. Newton shares a link to TikTok’s publicly posted community guidelines, but notes it contains no mention of political posts. I wonder why that could be.

Cynthia Murrell, October 22, 2019

Understanding Social Engineering

September 6, 2019

“Quiet desperation”? Nope, just surfing on psychological predispositions. Social engineering leads to a number of fascinating security lapses. For a useful analysis of how pushing buttons can trigger some interesting responses, navigate to “Do You Love Me? Psychological Characteristics of Romance Scam Victims.” The write up provides some useful insights. We noted this statement from the article:

a susceptibility to persuasion scale has been developed with the intention to predict likelihood of becoming scammed. This scale includes the following items: premeditation, consistency, sensation seeking, self-control, social influence, similarity, risk preferences, attitudes toward advertising, need for cognition, and uniqueness. The current work, therefore, suggests some merit in considering personal dispositions might predict likelihood of becoming scammed.

Cyberpsychology at work.

Stephen E Arnold, September 6, 2019

Citizen Action within Facebook

September 5, 2019

Pedophiles flock to wherever kids are. Among these places are virtual hangouts such as Facebook, Instagram, YouTube, Twitter, and more. One thing all criminals can agree on is that they hate pedophiles, and in the big house they take justice into their own hands. Outside of prison, Facebook vigilantes take down pedophiles. Quartz reports on the phenomenon in the article, “There’s A Global Movement Of Facebook Vigilantes Who Hunt Pedophiles.”

The Facebook vigilantes are regular people with families and jobs who use their spare time to hunt pedophiles grooming children for sexual exploitation. Pedophile hunting became popular in the early 2000s, when Chris Hansen hosted the show To Catch a Predator. It is popular not only in the United States but in countries around the world. A big part of pedophile vigilantism is public shaming:

“‘Pedophile hunting’ or ‘creep catching’ via Facebook is a contemporary version of a phenomenon as old as time: the humiliating act of public punishment. Criminologists even view it as a new expression of the town-square execution. But it’s also clearly a product of its era, a messy amalgam of influences such as reality TV and tabloid culture, all amplified by the internet.”

One might not think there is a problem with embarrassing pedophiles via live stream, but there are unintended consequences. Some of the “victims” commit suicide, the vigilantes’ evidence might not hold up in court, and the hunters might not have all the facts and context:

“They have little regard for due process or expectations of privacy. The stings, live-streamed to an engaged audience, become a spectacle, a form of entertainment—a twisted consequence of Facebook’s mission to foster online communities.”

Facebook’s community-driven algorithms make it easy to follow, support, and join these vigilante groups. The hunters’ motives are often cathartic, and they are keen on doling out street justice, but they may operate outside the law.

Whitney Grace, September 5, 2019

A Partial Look: Data Discovery Service for Anyone

July 18, 2019

F-Secure has made available a Data Discovery Portal. The idea is that a curious person (not anyone on the DarkCyber team but one of our contractors will be beavering away today) can “find out what information you have given to the tech giants over the years.” Pick a social media service — for example, Apple — and this is what you see:

[Screenshot: F-Secure Data Discovery Portal]

A curious person plugs in the Apple ID information and F-Secure obtains and displays the “data.” If one works through the services for which F-Secure offers this data discovery service, the curious user will have provided some interesting data to F-Secure.

Sound like a good idea? You can try it yourself at this F-Secure link.

F-Secure operates from Finland and was founded in 1988.

Do you trust the Finnish antivirus wizards with your user names and passwords to your social media accounts?

Are the data displayed by F-Secure comprehensive? Filtered? Accurate?

Stephen E Arnold, July 18, 2019

Google Takes Another Run at Social Media

July 12, 2019

The Google wants to be a winner in social media. “Google Is Testing a New Social Network for Offline Meetups” describes the Shoelace social network. Shoelaces keep footwear together. The metaphor is … interesting.

The write up states:

The aim behind coming up with this innovative social networking app is to let people find like-minded people around with whom they can meet and share things between each other. The interests could be related to social activities, hobbies, events etc.

The idea of finding people seems innocuous enough. But what if one or more bad actors use the new Google social network in unanticipated ways?

The write up reports:

It will focus more on providing a platform to meet and expand businesses and building communities with real people.

The Google social play has “loops.” What’s a loop? DarkCyber learned:

This is a new name for Events. You can make use of this feature to create an event where people can see your listings and try to join the event as per their interests.

What an innovative idea? No other service — including Meetup.com, Facebook, and similar plays — has this capability.

Like YouTube’s “new” monetization methods which seem similar to Twitch.tv’s, Google is innovating again.

Mobile. Find people. Meet up.

Maybe Google’s rich, bold, proud experiences with Orkut, Google Buzz, and Google+ were useful? Effort does spark true innovation … maybe.

Stephen E Arnold, July 12, 2019

Twitter Tools

June 10, 2019

One of our readers spotted “5 Twitter Tools to Discover the Best and Funniest Tweets.” The article is a roundup of software utilities that provide a selected stream of information from Twitter “content creators.” Keep in mind that threads have been rendered almost useless by Twitter’s editorial procedures. Nevertheless, if you don’t have access to a system which provides the “firehose” content or a repository of indexed and parsed Twitter content, you may find one of these useful:

  • Funny Tweeter
  • Ketchup (an easy way to provide Google with information about Tweets)
  • Really Good Questions
  • Thread Reader (what about those disappeared tweets and the not-available tweets?)
  • Twitter’s digest
  • Twubbler (not exactly a Palantir Gotham timeline, however)

Consult the source article for explanations of each and the links.

Stephen E Arnold, June 10, 2019

Reflecting about New Zealand

June 5, 2019

Following the recent attacks in two New Zealand mosques, during which a suspected terrorist successfully live-streamed horrific video of their onslaught for over a quarter-hour, many are asking why the AI tasked with keeping such content off social media failed us. As it turns out, context is key. CNN explains “Why AI Is Still Terrible at Spotting Violence Online.” Reporter Rachel Metz writes:

“A big reason is that whether it’s hateful written posts, pornography, or violent images or videos, artificial intelligence still isn’t great at spotting objectionable content online. That’s largely because, while humans are great at figuring out the context surrounding a status update or YouTube, context is a tricky thing for AI to grasp.”

Sites currently try to account for that shortfall with a combination of AI and human moderators, but they have trouble keeping up with the enormous influx of postings. For example, we’re told YouTube users alone upload more than 400 hours of video per minute. Without enough people to provide context, AI is simply at a loss. Metz notes:

“AI is not good at understanding things such as who’s writing or uploading an image, or what might be important in the surrounding social or cultural environment. … Comments may superficially sound very violent but actually be satire in protest of violence. Or they may sound benign but be identifiable as dangerous to someone with knowledge about recent news or the local culture in which they were created.”

We also noted:

“… Even if violence appears to be shown in a video, it isn’t always so straightforward that a human — let alone a trained machine — can spot it or decide what best to do with it. A weapon might not be visible in a video or photo, or what appears to be violence could actually be a simulation.”

On top of that, factors that may not be apparent to human viewers, like lighting, background images, or even frames per second, complicate matters for AI. It appears it will be some time before we can rely on algorithms to shield social media from abhorrent content. Can platforms come up with some effective alternative in the meantime? The pressure is on.

Cynthia Murrell, June 5, 2019

US Government Social Media Archive

May 28, 2019

Library of Congress, hello, LOC, are you there? What about other US government agencies? Do you have these data?

Maybe not?

I read “U.S. Navy Creating a 350 Billion Record Social Media Archive” and there is not one word about the Library of Congress. The US Navy wants to build a social media collection. Based on the sketchy information available, the content scope will include:

  • Messages from 200 million unique users (about 30 percent of social media users)
  • Time window: July 1, 2014, to December 31, 2016
  • 100 languages
  • Metadata (date, time, location, etc.).
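
A rough scale check (our arithmetic, not a figure from the RFP): 350 billion records spread across 200 million users works out to roughly 1,750 messages per user, or about two messages per user per day over the 30-month window.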

The RFP is located on FedBizOpps.

Stephen E Arnold, May 28, 2019
