Jack Benny Tropes Return: Tweets Are Making the Oooooold New Again
June 5, 2020
The Jack Benny Radio Show. A recurring character played by Frank Nelson, who says, “Yeeeeeesssss.” Funny, yep. When? A half century ago. So what?
Even with the breadth of emojis, GIFs, videos, and other accoutrements, it is hard to express emotional intent through text. Wired investigated how and why emotions are expressed this way in the article, “Whoooaaa Duuuuude: Why We Stretch Words In Tweets And Texts.”
Researchers at the University of Vermont studied tweets to learn why elongated words are used so often on the social media platform. They discovered that stretching a word is a linguistic device conveying a range of emotions from excitement to sarcasm. Exclamation points are the old dead tree way to express anything from excitement to fear, but apparently they are old fashioned, and it shows restraint not to use one. People turn to stretched words to add more meaning to their tweets.
The team examined 10% of tweets sent between 2008 and 2016 for elongated words. Their research yielded interesting patterns, but the most obvious is how complex human emotion is for AI to parse:
“Because stretched words can be embedded with so much extra meaning beyond the words themselves, understanding them is critical for artificial intelligences that analyze text, like chatbots. At the moment, a stretched word may be so perplexing for an AI that the program just skips over it entirely. We don’t want to have to bold or italicize words to emphasize them for the chatbot to parse—and even then, such formatting can’t replicate the range of emotions that stretched words convey.”
Studies like this help AI and machine learning understand the subtle nuances of human language. It will be decades before machines are entirely capable of understanding human language patterns, but the more data they have, the closer they come.
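Handling stretched words in an NLP pipeline usually starts with collapsing the repeats while recording how long they were, since the stretch itself carries the emotional signal. A minimal sketch in Python (an illustration, not the Vermont team’s actual code):

```python
import re

def normalize_stretch(word, keep=2):
    """Collapse runs of a repeated character and report the longest run.

    "Whoooaaa" -> ("Whooaa", 3): the text is normalized for a dictionary
    lookup, while the run length is kept as an intensity feature a
    downstream model (say, a chatbot) could use instead of skipping
    the word entirely.
    """
    longest = max((len(m.group(0)) for m in re.finditer(r"(.)\1+", word)),
                  default=1)
    collapsed = re.sub(r"(.)\1{%d,}" % keep, r"\1" * keep, word)
    return collapsed, longest

for w in ["Whoooaaa", "Duuuuude", "Yeeeeeesssss"]:
    print(w, "->", normalize_stretch(w))
```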
Oh, Rochester, yessssss bossssss.
Whitney Grace, June 5, 2020
Grammar? You Must Be Joking!
June 5, 2020
Perhaps the set of rules many of us worked so hard to master has become but a quaint convention. Write to Edit discusses the question, “Does Grammar Even Matter Anymore?” Writing practices are changing so fast that it is a natural question to ponder. However, states writer Amelia Zimmerman, that very question misses the point. It is the old prescriptivism vs. descriptivism issue: is grammar a set of fixed rules to be adhered to or an evolving account of how language is used? Zimmerman writes:
“Neither side is entirely wrong. Although correct grammar is important for clarity and often determines your reputation on the page, language is an evolving thing, not a static rulebook. Things people said in Shakespeare’s day would hardly be said now; even the spelling and meaning of words changes over time (literally doesn’t mean literally anymore). Now, the internet, text messages and emojis are changing the English language faster than ever. But this divide focuses on the small-picture topic of grammar without addressing the big-picture idea which is meaning. Grammar is a tool that, when used correctly, creates clarity and delivers meaning. But that’s all it is — a tool. Whether grammar matters is the wrong question. The right question is whether meaning matters — whether clarity matters — and that answer will never change.”
Of course, the answer there is yes; clarity is the cardinal quality for any good editor. The article goes on to examine what grammar rules really are (most are more like guidelines, really) and when one might choose to break them. Sometimes breaking a convention makes the meaning clearer; other times doing so makes a sentence more appealing, persuasive, or succinct. Zimmerman concludes:
“Most grammar guidelines have been constructed and are adhered to in such a way that they do help transmit your meaning clearly. … But sometimes adhering too strictly to old notions of grammar can get in the way of comprehension, make your writing too long-winded or ridiculous, or restrict creative expression and poetic effect. That’s when a mix of common sense and your own gut should prevail.”
This descriptivist heartily concurs. Remember: “the number” is singular, while “a number” is plural. And “none”? Treat it as singular, so “none is” agrees. Bummer.
Cynthia Murrell, June 5, 2020
GeoSpark Analytics: Real Time Analytics
April 6, 2020
In late 2017, OGSystems spun out some of the firm’s analytics capabilities into a new company, Geospark Analytics. The service enabled customers like the US Department of Defense and FEMA to obtain information about important new events. “Events” is jargon for an alert plus supporting data about something important.
“FEMA Contractor Tracing Coronavirus Deaths Uses Web Scraping, Social Media Monitoring” explains one use of the system. The write up says:
Geospark Analytics combines machine learning and big data to analyze events in real-time and warn of potential disruptions to the businesses of high-dollar private and public clientele…
Like BlueDot in Canada, Geospark was one of the monitoring companies analyzing open source and some specialized data to find interesting events. The write up continues:
Geospark Analytics’ product, called Hyperion, the namesake of the Titan son of Uranus (meaning, “watcher from above”), fingered Wuhan as a “hotspot,” in the company’s parlance, within hours after news of the virus first broke. “Hotspots tracks normal patterns of activity across the globe and provides a visual cue to flag disruptive events that could impact your employees, operations, and investments and result in billions of dollars in economic losses,” the company’s website says.
Engadget points out that there are a couple of companies with the name “Geospark.” DarkCyber finds this interesting. This statement provides more color about the Geospark approach:
Geospark Analytics claims to have processed “6.8 million” sources of information; everything from tweets to economic reports. “We geo-position it, we use natural language processing, and we have deep learning models that categorize the data into event and health models,” Goolgasian [Geospark’s CEO] said. It’s through these many millions of data points that the company creates what it calls a “baseline level of activity” for specific regions, such as Wuhan. A spike of activity around any number of security-, military-, or health-related topics and the system flags it as a potential disruption.
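In outline, a “baseline level of activity” plus spike flagging is a classic anomaly detection pattern. Here is a minimal sketch of the idea using a rolling mean and z-score; the names and thresholds are illustrative assumptions, not Geospark’s actual Hyperion implementation:

```python
from collections import deque
import math

def detect_hotspots(counts, window=168, threshold=3.0):
    """Yield (index, count, z-score) where activity spikes above baseline.

    counts: sequence of event counts for one region/topic (e.g., hourly
            health-related posts mentioning a city).
    window: observations forming the rolling baseline (168 hours = 1 week).
    """
    history = deque(maxlen=window)
    for i, count in enumerate(counts):
        if len(history) >= window // 2:          # wait for a usable baseline
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var) or 1.0          # guard against a flat series
            z = (count - mean) / std
            if z >= threshold:
                yield i, count, z                # flag a potential disruption
        history.append(count)

# Quiet baseline, then a burst of activity
series = [5, 6, 4, 5, 7, 5, 6, 5, 4, 6] * 10 + [60, 80, 120]
for hit in detect_hotspots(series, window=48):
    print(hit)
```

A sustained z-score above the threshold for a region is, roughly, the “visual cue to flag disruptive events” the quote describes.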
How does Geospark avoid the social media noise, bias, and disinformation that find their way into open source content? The article states:
“We rely more on traditional data sources and we don’t do anything that isn’t publicly available,” Goolgasian said, echoing a common refrain among data firms that fuel surveillance products by mining the internet itself.
Providing specialized services to government agencies is not much of a surprise in DarkCyber’s opinion. Financial firms can also be avid consumers of real-time data. The idea is to get the jump on the competition, which probably has its own sources of digital insights.
Other observations:
- The apparent “surprise” threading through the Engadget article is a bit off-putting. DarkCyber is aware of a number of social media and specialized content monitoring services. In fact, there is a surplus of these operations, and not all will survive in the present business climate.
- Detecting and alerting are helpful, but the messengers failed to achieve impact. How does DarkCyber know? Well, there is the lockdown.
- Publicizing what companies like Geospark and others do to generate income can have interesting consequences.
Net net: Some types of specialized services are difficult to explain in a way that reduces blowback. Some of that blowback has had significant impact on social media analytics companies. The Geofeedia case is a reminder. I know. I know. “What’s a Geofeedia?” some may ask.
Good question and DarkCyber thinks few know the answer. Plucking insights from information many people believe to be privileged can be fraught with business shock waves.
Stephen E Arnold, April 6, 2020
Cambridge Analytica Alum: Social Media Is Like Bad, You Know
April 4, 2020
A voice of (in)experience describes how tech companies can be dangerous when left unchecked. Channel News Asia reports, “Tech Must Be Regulated Like Tobacco, says Cambridge Analytica Whistleblower.” Christopher Wylie is the data scientist who exposed Cambridge Analytica’s use of Facebook data to manipulate the 2016 presidential election, among others. He declares society has yet to learn the lesson of that scandal. Yes, Facebook was fined a substantial sum, but it and other tech giants continue to operate with little to no oversight. The article states:
“Wylie details in his book how personality profiles mined from Facebook were weaponised to ‘radicalise’ individuals through psychographic profiling and targeting techniques. So great is their potential power over society and people’s lives that tech professionals need to be subject to the same codes of ethics as doctors and lawyers, he told AFP as his book was published in France. ‘Profiling work that we were doing to look at who was most vulnerable to being radicalised … was used to identify people in the US who were susceptible to radicalisation so that they could be encouraged and catalysed on that path,’ he said. ‘You are being intentionally monitored so that your unique biases, your anxieties, your weaknesses, your needs, your desires can be quantified in such a way that a company can seek to exploit that for profit,’ said the 30-year-old. Wylie, who blew the whistle to British newspaper, The Guardian, in Mar 2018, said at least people now realise how powerful data can be.”
As in any industry, tech companies are made up of humans, some of whom are willing to put money over morality. And as in other consequential industries like construction, engineering, medicine, and law, Wylie argues, regulations are required to protect consumers from that which they do not understand.
Cynthia Murrell, April 4, 2020
Biased? You Betcha
March 11, 2020
Fact checkers probably have one of the hardest jobs, especially with today’s 24/7 news stream. Determining what the facts are is difficult and requires proper research. Fact checkers, however, have an even tougher nut to crack: confirmation bias, the subject of this article from Nieman Lab: “The Fact-Checker’s Dilemma: Humans Are Hardwired To Dismiss Facts That Don’t Fit Their Worldview.”
The article opens with a poignant statement about polarized, insulated ideological communities fortified by their own beliefs. Examples include communities convinced that autism is caused by vaccines, that global warming is a hoax, and assorted political mishmash.
Refuting false information should be simple, especially with cold, hard facts, but that is not the case. Politics, religion, ethnicity, nationality, and other factors influence how and what people believe. What is the cause of this behavior?
“The interdisciplinary study of this phenomenon has exploded over just the past six or seven years. One thing has become clear: The failure of various groups to acknowledge the truth about, say, climate change, isn’t explained by a lack of information about the scientific consensus on the subject. Instead, what strongly predicts denial of expertise on many controversial topics is simply one’s political persuasion.”
What is astonishing is this:
“A 2015 metastudy showed that ideological polarization over the reality of climate change actually increases with respondents’ knowledge of politics, science, and/or energy policy. The chances that a conservative is a climate change denier is significantly higher if he or she is college-educated. Conservatives scoring highest on tests for cognitive sophistication or quantitative reasoning skills are most susceptible to motivated reasoning about climate science.”
While the above example is about conservatives, liberals have their own confirmation bias dilemmas. This behavior is also linked to primal human patterns: in order to join a social group, humans had to assimilate the group’s beliefs and habits. Personally held prejudices do affect factual beliefs, and they can touch anything: politics, religion, and so on.
Unwelcome information also drives people to cling to wrong information. Anything that threatens an established system encourages closed-minded thinking. This in turn gives rise to denial and conspiracy theories that come to be regarded as fact even when there is no evidence to support them.
It is basic human behavior to reject anything that threatens strongly held interests, dogmas, or creeds, giving way to denial. Politicians manipulate that behavior to their benefit, and the average individual does not realize it. “Waking up,” or becoming aware of how the human brain works in relation to confirmation bias, is key to overcoming false facts.
Whitney Grace, March 11, 2020
Facebook Is Definitely Evil: Plus or Minus Three Percent at a 95 Percent Confidence Level
March 2, 2020
The Verge Tech Survey 2020 allegedly and theoretically reveals the deepest thoughts, preferences, and perceptions of people in the US. The details of these people are sketchy, but that’s not the point of the survey. The findings suggest that Facebook is a problem. Amazon is a problem. Other big tech companies are problems. Trouble right here in digital city.
The findings come from a survey of 1,123 people “nationally representative of the US.” There was no information about income, group with which the subject identifies, or methodology. But the stated result is plus or minus three percent at a 95 percent confidence level. That sure seems okay despite DarkCyber’s questions about:
- Sample selection. Who pulled the sample, from where, were people volunteers, etc.
- “Nationally representative” means what? Was it the proportional representation method? How many people from Montana and the other “states”? What about Puerto Rico? Who worked for which company?
- Plus or minus three percent. That’s a swing at a 95 percent confidence level (a back-of-the-envelope check appears after this list). In terms of optical character recognition, that works out to three to six errors per page about 95 percent of the time. Is this close enough for a drone strike or an enforcement action? Oh, right, this is a survey about big tech. Big tech doesn’t think the DarkCyber way, right?
- What were the socio-economic strata of the individuals in the sample?
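For what it’s worth, the plus or minus three percent figure is at least consistent with a sample of 1,123. The standard worst-case margin of error works out as follows (a back-of-the-envelope check, not the Verge’s stated methodology):

```python
import math

n = 1123   # reported sample size
p = 0.5    # worst-case proportion: maximizes the margin
z = 1.96   # z-score for a 95 percent confidence level

margin = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/-{margin:.1%}")   # ~ +/-2.9%, i.e., "three percent"
```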
What’s revealed or discovered?
First, people love most of the high profile “names” or “brands.” Amazon is numero uno, the Google is number two, and YouTube (which is the Google, in case you have forgotten) is number three. So far, the data look like a name recognition test. “Do you prefer this unknown lye soap or Dove?” Yep, people prefer Dove. But lye soap may be making a comeback.
The stunning finding is that Facebook and Twitter impact society in a negative way. Contrast this with lovable Google and Amazon: 72 percent are favorable to the Google, and 70 percent are favorable to Amazon.
Here are the data about which companies people trust. Darned amazing. People trust Microsoft and Amazon the most.
Which companies do the homeless and people in rural West Virginia trust?
Plus, 72 percent of the sample believe Facebook has too much “power.” What does power mean? No clue from the context of this survey.
Gentle reader, please examine the article containing these data. I want to go back in time and reflect on the people who struggled in my statistics classes. Painful memories, but I picked up some cash tutoring. I got out of that business because some folks don’t grasp numerical recipes.
Stephen E Arnold, March 2, 2020
Social Media Versus a Nation State: Pakistan Versus the Facebook and Friends
February 29, 2020
DarkCyber believes that collisions of conscience and money will become more frequent in 2020. “Facebook, Twitter, Google Threaten to Suspend Services in Pakistan” explains that the Asia Internet Coalition does not want a nation state to get in the way of the standard operating procedures for US companies. Imagine. A country telling US firms what’s okay and what’s not okay. A mere country!
The government of Pakistan’s social media position is reflected in this passage from the article:
The new set of regulations makes it compulsory for social media companies to open offices in Islamabad, build data servers to store information and take down content upon identification by authorities. Failure to comply with the authorities in Pakistan will result in heavy fines and possible termination of services.
The consequences of ignoring the nation state’s approach to social media are not acceptable to the US companies. Pakistan’s ideas are easy to understand:
According to the law, authorities will be able to take action against Pakistanis found guilty of targeting state institutions at home and abroad on social media.
The law will also help the law enforcement authorities obtain access to data of accounts found involved in suspicious activities. It would be the said authority’s prerogative to identify objectionable content to the social media platforms to be taken down. In case of failure to comply within 15 days, it would have the power to suspend their services or impose a fine worth up to 500 million Pakistani rupees ($3 million).
DarkCyber finds it interesting that three high profile social media companies have formed some sort of loose federation in order to catch Pakistan’s attention.
Will the play work? Will other countries fall in line with the social media firms’ ideas of what’s acceptable and what’s not? Will China, Russia, and their client states go with the social media flow or resist? Are the US companies unreasonable?
Interesting questions.
Stephen E Arnold, February 29, 2020
Twitter: Embracing Management Maturity?
January 20, 2020
Twitter has a new initiative in 2020 to keep academic researchers honest, although it is not advertised in that manner. TechCrunch shares the details in the article, “Twitter Offers More Support To Researchers – To ‘Keep Us Accountable.’” Twitter’s new support for academic researchers is a hub called “Twitter Data for Academic Researchers,” which offers easier access to Twitter’s information about and support for its APIs. Within the hub, one can apply for a developer account, follow links to researcher tools, and find information about the APIs Twitter offers.
Twitter apparently added the Twitter Data for Academic Researchers hub this year based on researchers’ demands. The social media platform says it wants to encourage communication with and offer more support to developers. One reason Twitter wants more transparency and easier communication with its developers is the 2020 US presidential election. Twitter, like most social media platforms, wants to cut down the number of bots and/or false news reports like those that affected the 2016 election. There is also the need to tamp down these accounts on a regular basis:
“Tracking conversation flow on Twitter also still means playing a game of ‘bot or not’ — one that has major implications for the health of democracies. And in Europe Twitter is one of a number of platform giants which, in 2018, signed up to a voluntary Code of Practice on disinformation that commits it to addressing fake accounts and online bots, as well as to empowering the research community to monitor online disinformation via “privacy-compliant” access to platform data.”
Twitter wants to support its developer community, but the transparency also makes it easier for Twitter to hold people responsible for their actions. The company keeps tabs on how its technology is used while also assisting developers with their work. It is a sensible idea, and if trouble arises, it might make it easier to track down the bad actors who started the mess. It is also another score for Twitter, because Facebook does not support academics well: Facebook has repeatedly altered its APIs on researchers, and it seems unwilling to stop false information from spreading.
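For researchers starting from the hub, the mechanics are modest. A rough sketch of pulling recent tweets through Twitter’s standard v1.1 search API (the bearer token is a placeholder issued with a developer account; treat the details as an assumption to verify against the docs the hub links to):

```python
import requests

BEARER_TOKEN = "YOUR-BEARER-TOKEN"  # placeholder: issued via a developer account

def search_tweets(query, count=10):
    """Fetch recent tweets matching a query via the v1.1 standard search API."""
    resp = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        params={"q": query, "count": count, "result_type": "recent"},
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    )
    resp.raise_for_status()
    return [status["text"] for status in resp.json()["statuses"]]

# A researcher studying disinformation might sample an election hashtag:
for text in search_tweets("#Election2020", count=5):
    print(text)
```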
Whitney Grace, January 20, 2020
Bye-Bye Apple Store Reviews And Ratings
December 17, 2019
Apple makes products that inspire loyalty in some. Apple believes it knows best, too.
Some believe the Mac operating system is superior to Windows 10 and Linux in virus protection, performance, and longevity.
Is Apple perfect? Sure, to a point. But the company can trip over its own confidence. One good thing about Apple is that it is known for good customer service, acceptance of negative feedback, and allowing customers to review and rate products on the Apple Store. In a business move inspired by Apple’s changing of its maps in Russia, Apple Insider reports that “Apple Pulls All Customer Reviews From Online Apple Store.”
On Apple’s online retail stores, all of the user review pages have been removed from the US, Australian, and UK Web sites. Apple has been praised for its transparency and for allowing users to post negative reviews on the official Apple store. If Apple makes this a business practice, it could lose its congenial reputation.
Apple Insider used the Wayback Machine and discovered that the reviews were pulled sometime between the evening of November 16 and the morning of November 17. Despite all the negative reviews, the company can withstand a little negativity; it does not appear to pay attention to many of them:
“A YouTube video offered as part of the tip was published by the popular photography account, Fstoppers, titled “Apple Fanboys, Where is your God now?” In the video, the host reads a selection of negative reviews of the new 16-inch MacBook Pro with the video published on November 16, coinciding with the removal of the website feature.
However, it remains to be seen if the video had anything to do with Apple’s decision to remove the reviews, given the 56 thousand page views at the time of publication doesn’t seem like a high-enough number for Apple to pay attention to the video’s content. Other videos have been more critical about the company’s products, and some with far higher view counts, but evidently Apple seemingly does not spend that much time involving itself with such public complaints.”
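Dating a quiet site change this way is straightforward to reproduce. A minimal sketch against the Internet Archive’s CDX API (the URL queried is illustrative, not necessarily the page Apple Insider checked):

```python
import requests

def wayback_captures(url, start, end):
    """List Wayback Machine captures of a URL within a YYYYMMDD window."""
    resp = requests.get(
        "http://web.archive.org/cdx/search/cdx",
        params={"url": url, "from": start, "to": end,
                "output": "json", "fl": "timestamp,statuscode"},
    )
    resp.raise_for_status()
    rows = resp.json()
    return rows[1:]  # first row is the column header

# Captures bracketing the November 16-17 window when the reviews vanished
for ts, status in wayback_captures("apple.com/shop/reviews", "20191115", "20191118"):
    print(ts, status)
```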
The fact is that Apple makes some $60,000 pro products, and if just plain old people have problems, those happy buyers can visit Apple stores and search for a Genius to resolve them.
If Apple cannot fix the problems, a few believers might complain, move on, and then buy the next Apple product. Then the next one and the next and the next… Reviews are not necessary, right?
Whitney Grace, December 17, 2019
China Develops Suicide Detecting AI Bot
December 10, 2019
Most AI bots are used for customer support, mass online postings, downloading stuff, and criminal mischief. China has found another use for AI bots: detecting potential suicides. The South China Morning Post shared the article, “This AI Bot Finds Suicidal Messages On China’s Weibo, Helping Volunteer Psychologists Save Lives.” Asian countries have some of the world’s highest suicide rates. To combat the problem, Huang Zhisheng created the Tree Hole bot in 2018 to detect suicidal messages on Weibo, the Chinese equivalent of Twitter. The Tree Hole bot finds potential suicide victims posting on Weibo, then connects them with volunteers who discuss their troubles. Huang has prevented more than one thousand suicides.
In 2016, 136,000 people committed suicide in China, 17% of the world’s suicides that year. The World Health Organization states that suicide is the second leading cause of death among people ages 15-29. Other companies like Google, Facebook, and Pinterest have used AI to detect potentially suicidal users and self-harmers, but one of the biggest roadblocks is privacy concerns. Huang notes that saving lives is more important than privacy.
The Tree Hole bot takes a different approach from other companies’ systems to find alarming posts:
“The Tree Hole bot automatically scans Weibo every four hours, pulling up posts containing words and phrases like “death”, “release from life”, or “end of the world”. The bot draws on a knowledge graph of suicide notions and concepts, applying semantic analysis programming so it understands that “not want to” and “live” in one sentence may indicate suicidal tendency.
In contrast, Facebook trains its AI suicide prevention algorithm by using millions of real world cases. From April to June, the social media platform handled more than 1.5 million cases of suicide and self-injury content, more than 95 per cent of which were detected before being reported by a user. For the 800,000 examples of such content on Instagram during the same period, 77 per cent were first flagged by the AI system first, according to Facebook, which owns both platforms.”
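The logic described above (keyword matching plus same-sentence co-occurrence) is easy to illustrate. A minimal sketch; the phrase lists are crude stand-ins for the bot’s actual knowledge graph of suicide notions:

```python
import re

# Illustrative stand-ins for the bot's knowledge graph of suicide notions
ALARM_PHRASES = ("death", "release from life", "end of the world")
NEGATION_CUES = ("not want to", "no reason to")
LIFE_TERMS = ("live", "go on")

def flag_post(text):
    """Flag a post via keywords or a risky same-sentence co-occurrence.

    Mirrors the described rule: "not want to" and "live" appearing in
    the same sentence may indicate suicidal tendency.
    """
    lowered = text.lower()
    if any(p in lowered for p in ALARM_PHRASES):
        return True
    for sentence in re.split(r"[.!?]+", lowered):
        if any(n in sentence for n in NEGATION_CUES) and \
           any(t in sentence for t in LIFE_TERMS):
            return True
    return False

posts = ["I do not want to live anymore.", "Great day at the park!"]
print([flag_post(p) for p in posts])  # [True, False]
```

A production system would add word boundaries, semantic analysis, and the four-hour scanning cadence the article mentions; this only shows the shape of the rule.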
Assisting potential suicide victims is time consuming, and Huang is developing a chatbot that he hopes can take the place of Tree Hole volunteers. Mental health professionals argue that an AI bot cannot take the place of a real human, and developers point out there is not enough data to make an effective therapy bot.
Suicide prevention AI bots are terrific, but instead of relying on volunteers alone, would it be possible, at least outside of China, to create a non-profit organization staffed by professionals and volunteers?
Whitney Grace, December 10, 2019