Bye-Bye Apple Store Reviews And Ratings

December 17, 2019

Apple makes products that inspire loyalty in some. Apple also believes it knows best.

Some believe the Mac operating system is superior to Windows 10 and Linux in virus protection, performance, and longevity.

Is Apple perfect? Sure, to a point. But the company can trip over its own confidence. One point in Apple’s favor is its reputation for good customer service, acceptance of negative feedback, and allowing customers to review and rate products on the Apple Store. In a business move perhaps inspired by Apple’s changing of its maps in Russia, the company has taken a step AppleInsider describes as “Apple Pulls All Customer Reviews From Online Apple Store.”

All of the user review pages have been removed from Apple’s online retail stores in the US, Australia, and the UK. Apple has been praised for its transparency and for allowing users to post negative reviews on the official Apple store. If Apple makes this removal a standing business practice, it could lose its congenial reputation.

AppleInsider used the Wayback Machine and discovered that the reviews were pulled sometime between the evening of November 16 and the morning of November 17. Despite all the negative reviews, the company can withstand a little negativity and apparently does not even pay attention to many of them:

“A YouTube video offered as part of the tip was published by the popular photography account, Fstoppers, titled “Apple Fanboys, Where is your God now?” In the video, the host reads a selection of negative reviews of the new 16-inch MacBook Pro with the video published on November 16, coinciding with the removal of the website feature.

However, it remains to be seen if the video had anything to do with Apple’s decision to remove the reviews, given the 56 thousand page views at the time of publication doesn’t seem like a high-enough number for Apple to pay attention to the video’s content. Other videos have been more critical about the company’s products, and some with far higher view counts, but evidently Apple seemingly does not spend that much time involving itself with such public complaints.”

The fact is that Apple makes pro products costing as much as $60,000, and if just plain old people have problems, those buyers can visit an Apple Store and search out a Genius to resolve them.

If Apple cannot fix the problems, a few believers might complain, move on, and then buy the next Apple product. Then the next one and the next and the next… Reviews are not necessary, right?

Whitney Grace, December 17, 2019

China Develops Suicide Detecting AI Bot

December 10, 2019

Most AI bots are used for customer support, mass online posting, downloading stuff, and criminal mischief. China has found another use for AI bots: detecting potential suicides. The South China Morning Post shared the article, “This AI Bot Finds Suicidal Messages On China’s Weibo, Helping Volunteer Psychologists Save Lives.” Asian countries have some of the world’s highest suicide rates. To combat the problem, Huang Zhisheng created the Tree Hole bot in 2018 to detect suicidal messages on Weibo, the Chinese equivalent of Twitter. The Tree Hole bot finds potential suicide victims posting on Weibo, then connects them with volunteers to discuss their troubles. Huang’s effort has prevented more than one thousand suicides.

In 2016, 136,000 people committed suicide in China, which was 17% of the world’s suicides that year. The World Health Organization states that suicide is the second leading cause of death among people ages 15-29. Companies like Google, Facebook, and Pinterest have also used AI to detect potentially suicidal users or self-harmers, but one of the biggest roadblocks is privacy concerns. Huang notes that saving lives is more important than privacy.

The Tree Hole bot takes a different approach from other companies’ systems to find alarming posts:

“The Tree Hole bot automatically scans Weibo every four hours, pulling up posts containing words and phrases like “death”, “release from life”, or “end of the world”. The bot draws on a knowledge graph of suicide notions and concepts, applying semantic analysis programming so it understands that “not want to” and “live” in one sentence may indicate suicidal tendency.

In contrast, Facebook trains its AI suicide prevention algorithm by using millions of real world cases. From April to June, the social media platform handled more than 1.5 million cases of suicide and self-injury content, more than 95 per cent of which were detected before being reported by a user. For the 800,000 examples of such content on Instagram during the same period, 77 per cent were first flagged by the AI system, according to Facebook, which owns both platforms.”
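
For readers curious about the mechanics, the first-pass screening described in the quote (pulling posts that contain risk phrases, or a negated-intent pattern such as “not want to” plus “live”) can be sketched roughly as follows. This is a minimal Python illustration; the phrase lists, the Post structure, and the helper functions are assumptions for the example, and the real Tree Hole system adds a knowledge graph and semantic analysis of Chinese-language text.

```python
# Hypothetical first-pass screen in the spirit of the Tree Hole bot's keyword
# scan. The phrase lists and helper names are assumptions for illustration;
# the real system uses a knowledge graph plus semantic analysis of Chinese text.

from dataclasses import dataclass
from typing import List

# Example phrases taken from the article's English rendering.
RISK_PHRASES = ["death", "release from life", "end of the world"]
# Co-occurrence pattern the article mentions: "not want to" together with "live".
NEGATED_INTENT = [("not want to", "live")]

@dataclass
class Post:
    author: str
    text: str

def flag_post(post: Post) -> bool:
    """Return True if the post contains a risk phrase or a negated-intent pattern."""
    text = post.text.lower()
    if any(phrase in text for phrase in RISK_PHRASES):
        return True
    return any(a in text and b in text for a, b in NEGATED_INTENT)

def scan(posts: List[Post]) -> List[Post]:
    """Screen a batch of posts (e.g., pulled every four hours) for volunteer review."""
    return [p for p in posts if flag_post(p)]

if __name__ == "__main__":
    sample = [
        Post("user1", "I do not want to live anymore"),
        Post("user2", "Great dinner with friends tonight"),
    ]
    for p in scan(sample):
        print(f"Flag for volunteer review: {p.author}")
```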

Assisting potential suicide victims is time consuming, and Huang is developing a chatbot that he hopes can take the place of Tree Hole volunteers. Mental health professionals argue that an AI bot cannot take the place of a real human, and developers point out there is not enough data to build an effective therapy bot.

Suicide prevention AI bots are terrific, but instead of relying on volunteers alone, would it be possible, at least outside of China, to create a non-profit organization staffed by professionals and volunteers?

Whitney Grace, December 10, 2019

Comedian Meets Times of Israel: A Draw?

December 3, 2019

News about Borat – I mean Baron Cohen – has been flowing. A US pundit gushed over a speech by Mr. Cohen, a comedian. The Times of Israel took another approach to the Cohen critique of social media. “It’s not Facebook, Sacha, It’s Humanity” stated:

Baron Cohen is charismatic, well-spoken, and appeals to the most primal emotion of all mankind: fear. This makes him an excellent propagandist. That does not mean he is completely wrong, but it does not make him entirely right, either.

DarkCyber noted this statement in the write up:

The main argument Baron Cohen made in his speech, which is neither original nor new, is that social media platforms do not assume the mantle of preventing the numerous lies and profound hatred that is disseminated through them. Baron Cohen echoed the global criticism of the ease by which one can spread conspiracy theories, invent news headlines, make up figures, and incite against sectors, genders, minorities, and religions.

The write up pointed out:

Human history shows that where information does not flow freely and in times when information is blocked by geographical barriers and can only slowly creep out to the rest of the world — that is when the worst kind of atrocities take place.

Is there a fix? The article suggests:

There are many issues with the way Facebook, Twitter, and Google operate, but very few of them stem from the lack of regulation of the content posted on them. If anything, it would be much more practical and appropriate to review these companies’ conduct as service providers.

The comedian or the journalist? Where does the truth set up a camp site?

Stephen E Arnold, December 3, 2019

Can Machine Learning Pick Out The Bullies?

November 13, 2019

In Walt Disney’s 1942 classic Bambi, Thumper the rabbit was told, “If you can’t say something nice, don’t say nothing at all.”

Poor grammar aside, the thumping rabbit did deliver wise advice to the audience. Then came the Internet and anonymity, and the trolls were released upon the world. Internet bullying is one of the world’s top cyber crimes, along with identity and money theft. Passionate anti-bullying campaigners, particularly individuals who were cyber-bullying victims, want social media Web sites to police their users and prevent the abusive crime. Trying to police the Internet is like herding cats. It might be possible with the right type of fish, but cats are not herd animals and scatter once the tasty fish is gone.

Technology might have advanced enough to detect bullying, and AI could be the answer. Innovation Toronto wrote, “Machine Learning Algorithms Can Successfully Identify Bullies And Aggressors On Twitter With 90 Percent Accuracy.” AI’s biggest problem is that while algorithms can identify and harvest information, they lack the ability to understand emotion and context. Many bullying actions on the Internet are sarcastic or hidden within metaphors.

Computer scientist Jeremy Blackburn and his team from Binghamton University analyzed bullying behavior patterns on Twitter. They discovered useful information for understanding the trolls:

“ ‘We built crawlers — programs that collect data from Twitter via variety of mechanisms,’ said Blackburn. ‘We gathered tweets of Twitter users, their profiles, as well as (social) network-related things, like who they follow and who follows them.’ ”

The researchers then performed natural language processing and sentiment analysis on the tweets themselves, as well as a variety of social network analyses on the connections between users. They developed algorithms to automatically classify two specific types of offensive online behavior: cyberbullying and cyber aggression. The algorithms were able to identify abusive users on Twitter with 90 percent accuracy. These are users who engage in harassing behavior, such as sending death threats or making racist remarks to other users.

“‘In a nutshell, the algorithms ‘learn’ how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples,’ said Blackburn.”
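
The write up does not include the researchers’ code, but the general recipe, turning collected tweet text into features and training a supervised classifier on labeled examples of bullying versus typical accounts, can be sketched along these lines. The toy data and the TF-IDF plus logistic regression pipeline below are illustrative assumptions, not Blackburn’s actual implementation, which also drew on sentiment and social-network features.

```python
# Rough illustration: classify accounts as bullies vs. typical users from text.
# The tiny dataset and the TF-IDF + logistic regression pipeline are assumptions
# for the example; the Binghamton study also used sentiment and social-network
# features and far more data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for tweets aggregated per user (hypothetical examples).
texts = [
    "you are worthless and everyone hates you",
    "nobody wants you here, just leave",
    "had a great time at the game today",
    "congrats on the new job, well deserved",
]
labels = ["bully", "bully", "typical", "typical"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["everyone hates you, just leave"]))  # expected: ['bully']
print(model.predict(["great job on the project"]))        # expected: ['typical']
```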

Blackburn and his team’s algorithm only detects the aggressive behavior; it does not do anything to prevent cyberbullying. The victims still see and are harmed by the comments and the bullying users, but the approach does give Twitter a heads-up for removing the trolls.

The anti-bullying algorithm flags bullying only after there are victims. It does little to assist those victims, though it may prevent future attacks. What steps need to be taken to prevent bullying altogether? Maybe schools need to teach classes on Internet etiquette alongside the Common Core; then again, if it is not on the test, it will not be in the classroom.

Whitney Grace, November 13, 2019

False News: Are Smart Bots the Answer?

November 7, 2019

To us, this comes as no surprise—Axios reports, “Machine Learning Can’t Flag False News, New Studies Show.” Writer Joe Uchill concisely summarizes some recent studies out of MIT that should quell any hope that machine learning will save us from fake news, at least any time soon. Though we have seen that AI can be great at generating readable articles from a few bits of info, mimicking human writers, and even detecting AI-generated stories, that does not mean it can tell the true from the false. The studies were performed by MIT doctoral student Tal Schuster and his team of researchers. Uchill writes:

“Many automated fact-checking systems are trained using a database of true statements called Fact Extraction and Verification (FEVER). In one study, Schuster and team showed that machine learning-taught fact-checking systems struggled to handle negative statements (‘Greg never said his car wasn’t blue’) even when they would know the positive statement was true (‘Greg says his car is blue’). The problem, say the researchers, is that the database is filled with human bias. The people who created FEVER tended to write their false entries as negative statements and their true statements as positive statements — so the computers learned to rate sentences with negative statements as false. That means the systems were solving a much easier problem than detecting fake news. ‘If you create for yourself an easy target, you can win at that target,’ said MIT professor Regina Barzilay. ‘But it still doesn’t bring you any closer to separating fake news from real news.’”
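
To see why a biased training set yields a shortcut rather than a fact checker, consider a toy version of the problem: if negated statements are mostly labeled false during training, a simple bag-of-words model learns to key on negation words instead of evidence. The miniature dataset below is an invented illustration of that failure mode, not the FEVER corpus itself.

```python
# Toy illustration of the dataset-bias shortcut described above: when negated
# statements are mostly labeled false in training, a simple classifier learns
# "negation implies false" and misjudges a true statement phrased negatively.
# All data here is invented for the example.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "greg says his car is blue",           # positive phrasing, labeled true
    "the bridge opened in 1932",           # positive phrasing, labeled true
    "greg never said his car wasn't red",  # negative phrasing, labeled false
    "the bridge was not built in 1932",    # negative phrasing, labeled false
]
train_labels = ["true", "true", "false", "false"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# A statement that is actually true but phrased negatively gets rated false,
# because the model keyed on negation words rather than on evidence.
print(model.predict(["greg never said his car wasn't blue"]))  # prints ['false']
```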

Indeed. Another of Schuster’s studies demonstrates that algorithms can usually detect text written by their kin. We’re reminded, however, that just because an article is machine written does not in itself mean it is false. In fact, he notes, text bots are now being used to adapt legit stories to different audiences or to generate articles from statistics. It looks like we will just have to keep verifying articles with multiple trusted sources before we believe them. Imagine that.

Cynthia Murrell, November 7, 2019

TikTok: True Colors?

October 22, 2019

Since it emerged from China in 2017, the video sharing app TikTok has become very popular. In fact, it became the most downloaded app in October of the following year, after merging with Musical.ly. That deal opened up the U.S. market, in particular, to TikTok. Americans have since been having a blast with the short-form video app, whose stated mission is to “inspire creativity and joy.” The Verge, however, reminds us where this software came from—and how its owners behave—in the article, “It Turns Out There Really Is an American Social Network Censoring Political Speech.”

Reporter Casey Newton grants that US-based social networks do impose limits, removing hate speech, violence, and sexual content from their platforms. However, that is a far cry from the types of censorship that are common in China. Newton points to a piece by Alex Hern in The Guardian that details how TikTok has directed its moderators to censor content about Tiananmen Square, Tibetan independence, and the Falun Gong religious group. It is worth mentioning that TikTok’s parent company, ByteDance, maintains a separate version of the app (Douyin) for use within China’s borders. The suppression documented in the Guardian story, then, is aimed specifically at the rest of us. Newton writes:

“As Hern notes, suspicions about TikTok’s censorship are on the rise. Earlier this month, as protests raged, the Washington Post reported that a search for #hongkong turned up ‘playful selfies, food photos and singalongs, with barely a hint of unrest in sight.’ In August, an Australian think tank called for regulators to look into the app amid evidence it was quashing videos about Hong Kong protests. On the one hand, it’s no surprise that TikTok is censoring political speech. Censorship is a mandate for any Chinese internet company, and ByteDance has had multiple run-ins with the Communist party already. In one case, Chinese regulators ordered its news app Toutiao to shut down for 24 hours after discovering unspecified ‘inappropriate content.’ In another case, they forced ByteDance to shutter a social app called Neihan Duanzi, which let people share jokes and videos. In the aftermath, the company’s founder apologized profusely — and pledged to hire 4,000 new censors, bringing the total to 10,000.”

For its part, TikTok insists the Guardian-revealed guidelines have been replaced with more “localized approaches,” and that they now consult outside industry leaders in creating new policies. Newton shares a link to TikTok’s publicly posted community guidelines, but notes it contains no mention of political posts. I wonder why that could be.

Cynthia Murrell, October 22, 2019

Understanding Social Engineering

September 6, 2019

“Quiet desperation”? Nope, just surfing on psychological predispositions. Social engineering leads to a number of fascinating security lapses. For a useful analysis of how pushing buttons can trigger some interesting responses, navigate to “Do You Love Me? Psychological Characteristics of Romance Scam Victims.” The write up provides some useful insights. We noted this statement from the article:

a susceptibility to persuasion scale has been developed with the intention to predict likelihood of becoming scammed. This scale includes the following items: premeditation, consistency, sensation seeking, self-control, social influence, similarity, risk preferences, attitudes toward advertising, need for cognition, and uniqueness. The current work, therefore, suggests some merit in considering personal dispositions might predict likelihood of becoming scammed.

Cyberpsychology at work.

Stephen E Arnold, September 6, 2019

Citizen Action within Facebook

September 5, 2019

Pedophiles flock to wherever kids are. Among these places are virtual hangouts, such as Facebook, Instagram, YouTube, Twitter, and more. One thing all criminals can agree on is that they hate pedophiles, and in the big house they take justice into their own hands. Outside of prison, Facebook vigilantes take down pedophiles. Quartz reports on how in the article, “There’s A Global Movement Of Facebook Vigilantes Who Hunt Pedophiles.”

The Facebook vigilantes are regular people with families and jobs who use their spare time to hunt pedophiles grooming children for sexual exploitation. Pedophile hunting became popular in the early 2000s, when Chris Hansen hosted the show To Catch a Predator. It is popular not only in the United States but in countries around the world. A big part of the pedophile vigilantism is the public shaming:

“ ‘Pedophile hunting’ or ‘creep catching’ via Facebook is a contemporary version of a phenomenon as old as time: the humiliating act of public punishment. Criminologists even view it as a new expression of the town-square execution. But it’s also clearly a product of its era, a messy amalgam of influences such as reality TV and tabloid culture, all amplified by the internet.”

One might not think there is a problem with embarrassing pedophiles via live stream, but there are unintended consequences. Some of the “victims” commit suicide, the vigilantes’ evidence might not hold up in court, and the hunters might not have all the facts and context:

“They have little regard for due process or expectations of privacy. The stings, live-streamed to an engaged audience, become a spectacle, a form of entertainment—a twisted consequence of Facebook’s mission to foster online communities.”

Facebook’s community-driven algorithms make it easy to follow, support, and join these vigilante groups. The hunters’ intentions are often cathartic, and they are keen on doling out street justice, but they may operate outside the law.

Whitney Grace, September 5, 2019

A Partial Look: Data Discovery Service for Anyone

July 18, 2019

F-Secure has made available a Data Discovery Portal. The idea is that a curious person (not anyone on the DarkCyber team, but one of our contractors will be beavering away today) can “find out what information you have given to the tech giants over the years.” Pick a service — for example, Apple — and this is what you see:

[Screenshot of the F-Secure Data Discovery Portal]

A curious person plugs in the Apple ID information and F-Secure obtains and displays the “data.” If one works through the services for which F-Secure offers this data discovery service, the curious user will have provided some interesting data to F-Secure.

Sound like a good idea? You can try it yourself at this F-Secure link.

F-Secure operates from Finland and was founded in 1988.

Do you trust the Finnish anti-virus wizards with your user names and passwords to your social media accounts?

Are the data displayed by F-Secure comprehensive? Filtered? Accurate?

Stephen E Arnold, July 18, 2019

Google Takes Another Run at Social Media

July 12, 2019

The Google wants to be a winner in social media. “Google Is Testing a New Social Network for Offline Meetups” describes the Shoelace social network. Shoelaces keep footwear together. The metaphor is … interesting.

The write up states:

The aim behind coming up with this innovative social networking app is to let people find like-minded people around with whom they can meet and share things between each other. The interests could be related to social activities, hobbies, events etc.

The idea of finding people seems innocuous enough. But what if one or more bad actors use the new Google social network in unanticipated ways?

The write up reports:

It will focus more on providing a platform to meet and expand businesses and building communities with real people.

The Google social play has “loops.” What’s a loop? DarkCyber learned:

This is a new name for Events. You can make use of this feature to create an event where people can see your listings and try to join the event as per their interests.

What an innovative idea! No other service — including Meetup.com, Facebook, and similar plays — has this capability.

Like YouTube’s “new” monetization methods which seem similar to Twitch.tv’s, Google is innovating again.

Mobile. Find people. Meet up.

Maybe Google’s rich, bold, proud experiences with Orkut, Google Buzz, and Google+ were useful? Effort does spark true innovation … maybe.

Stephen E Arnold, July 12, 2019
