GeoSpark Analytics: Real-Time Analytics

April 6, 2020

In late 2017, OGSystems spun out some of the firm’s analytics capabilities as a new company, Geospark Analytics. The service enables customers like the US Department of Defense and FEMA to obtain information about significant emerging events. “Events” is jargon for an alert plus supporting data about something important.

“FEMA Contractor Tracing Coronavirus Deaths Uses Web Scraping, Social Media Monitoring” explains one use of the system. The write up says:

Geospark Analytics combines machine learning and big data to analyze events in real-time and warn of potential disruptions to the businesses of high-dollar private and public clientele…

Like BlueDot in Canada, Geospark is one of the monitoring companies analyzing open source and some specialized data to find interesting events. The write up continues:

Geospark Analytics’ product, called Hyperion, the namesake of the Titan son of Uranus (meaning, “watcher from above”), fingered Wuhan as a “hotspot,” in the company’s parlance, within hours after news of the virus first broke. “Hotspots tracks normal patterns of activity across the globe and provides a visual cue to flag disruptive events that could impact your employees, operations, and investments and result in billions of dollars in economic losses,” the company’s website says.

Engadget points out that there are a couple of companies with the name “Geospark.” DarkCyber finds this interesting. This statement provides more color about the Geospark approach:

Geospark Analytics claims to have processed “6.8 million” sources of information; everything from tweets to economic reports. “We geo-position it, we use natural language processing, and we have deep learning models that categorize the data into event and health models,” Goolgasian [Geospark’s CEO] said. It’s through these many millions of data points that the company creates what it calls a “baseline level of activity” for specific regions, such as Wuhan. A spike of activity around any number of security-, military-, or health-related topics and the system flags it as a potential disruption.
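
The pipeline Goolgasian describes (geo-position, apply NLP, categorize, compare against a baseline) is, at its core, anomaly detection. Here is a minimal sketch of the baseline-and-spike idea in Python, with invented data, field names, and thresholds. This is DarkCyber’s illustration, not Geospark’s Hyperion code:

```python
from statistics import mean, stdev

def flag_hotspots(daily_counts, window=30, threshold=3.0):
    """Flag days whose event count spikes above a rolling baseline.

    daily_counts: list of (date, count) tuples for one region/topic.
    window: number of trailing days used to estimate the baseline.
    threshold: z-score above which a day is flagged as a disruption.
    """
    alerts = []
    for i in range(window, len(daily_counts)):
        date, count = daily_counts[i]
        baseline = [c for _, c in daily_counts[i - window:i]]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (count - mu) / sigma > threshold:
            alerts.append((date, count, round((count - mu) / sigma, 1)))
    return alerts

# Toy example: a quiet region with a sudden burst of health-related posts.
series = [("day%02d" % d, 10 + d % 3) for d in range(40)] + [("day40", 95)]
print(flag_hotspots(series))  # flags ('day40', 95, ...) as a hotspot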

How does Geospark avoid the social media noise, bias, and disinformation that finds its way into open source content? The article states:

“We rely more on traditional data sources and we don’t do anything that isn’t publicly available,” Goolgasian said, echoing a common refrain among data firms that fuel surveillance products by mining the internet itself.

Providing specialized services to government agencies is not much of a surprise in DarkCyber’s opinion. Financial firms can also be avid consumers of real-time data. The idea is to get the jump on the competition, which probably has its own sources of digital insights.

Other observations:

  • The apparent “surprise” threading through the Engadget article is a bit off-putting. DarkCyber is aware of a number of social media and specialized content monitoring services. In fact, there is a surplus of these operations, and not all will survive in the present business climate.
  • Detecting and alerting are helpful, but the messengers failed to achieve impact. How does DarkCyber know? Well, there is the lockdown.
  • Publicizing what companies like Geospark and others do to generate income can have interesting consequences.

Net net: Some types of specialized services are difficult to explain in a way that reduces blowback. That blowback can have a significant impact on social media analytics companies. The Geofeedia case is a reminder. I know. I know. “What’s a Geofeedia?” some may ask.

Good question, and DarkCyber thinks few know the answer. Plucking insights from information many people believe to be privileged can generate business shock waves.

Stephen E Arnold, April 6, 2020

Cambridge Analytica Alum: Social Media Is Like Bad, You Know

April 4, 2020

A voice of (in)experience describes how tech companies can be dangerous when left unchecked. Channel News Asia reports, “Tech Must Be Regulated Like Tobacco, says Cambridge Analytica Whistleblower.” Christopher Wylie is the data scientist who exposed Cambridge Analytica’s use of Facebook data to manipulate the 2016 presidential election, among others. He declares society has yet to learn the lesson of that scandal. Yes, Facebook was fined a substantial sum, but it and other tech giants continue to operate with little to no oversight. The article states:

“Wylie details in his book how personality profiles mined from Facebook were weaponised to ‘radicalise’ individuals through psychographic profiling and targeting techniques. So great is their potential power over society and people’s lives that tech professionals need to be subject to the same codes of ethics as doctors and lawyers, he told AFP as his book was published in France. ‘Profiling work that we were doing to look at who was most vulnerable to being radicalised … was used to identify people in the US who were susceptible to radicalisation so that they could be encouraged and catalysed on that path,’ he said. ‘You are being intentionally monitored so that your unique biases, your anxieties, your weaknesses, your needs, your desires can be quantified in such a way that a company can seek to exploit that for profit,’ said the 30-year-old. Wylie, who blew the whistle to British newspaper, The Guardian, in Mar 2018, said at least people now realise how powerful data can be.”

As in any industry, tech companies are made up of humans, some of whom are willing to put money over morality. And as in other consequential industries like construction, engineering, medicine, and law, Wylie argues, regulations are required to protect consumers from that which they do not understand.

Cynthia Murrell, April 4, 2020

Biased? You Betcha

March 11, 2020

Fact checkers probably have one of the hardest jobs, especially with today’s 24/7 news stream. Determining what the facts are is difficult and requires proper research. Fact checkers, however, have a tougher nut to crack: confirmation bias, the subject of this article from Nieman Lab: “The Fact-Checker’s Dilemma: Humans Are Hardwired To Dismiss Facts That Don’t Fit Their Worldview.”

The article opens with a poignant statement about polarized, insulated ideological communities reinforced by their own beliefs. Examples of these communities include those convinced that vaccines cause autism, that global warming is a hoax, and assorted political mishmash.

Refuting false information should be simple, especially with cold, hard facts, but that is not the case. Politics, religion, ethnicity, nationality, and other factors influence how and what people believe. What causes this behavior?

“The interdisciplinary study of this phenomenon has exploded over just the past six or seven years. One thing has become clear: The failure of various groups to acknowledge the truth about, say, climate change, isn’t explained by a lack of information about the scientific consensus on the subject. Instead, what strongly predicts denial of expertise on many controversial topics is simply one’s political persuasion.”

What is astonishing is this:

“A 2015 metastudy showed that ideological polarization over the reality of climate change actually increases with respondents’ knowledge of politics, science, and/or energy policy. The chances that a conservative is a climate change denier is significantly higher if he or she is college-educated. Conservatives scoring highest on tests for cognitive sophistication or quantitative reasoning skills are most susceptible to motivated reasoning about climate science.”

While the above example is about conservatives, liberals have their own confirmation bias dilemmas. This behavior is linked to primal human behaviors: in order to join a social group, humans had to assimilate the group’s beliefs and habits. Personally held prejudices, whether rooted in politics, religion, or anything else, do affect factual beliefs.

Unwelcome information also drives people to cling to wrong information. Anything that threatens an established belief system encourages closed-minded thinking. This gives rise to deniers and to conspiracy theories that are regarded as fact even when no information supports them.

It is basic human behavior to reject anything that threatens strongly held interests, dogmas, or creeds, giving way to denial. Politicians manipulate that behavior to their benefit, and the average individual does not realize it. “Waking up,” or becoming aware of how the human brain works in relation to confirmation bias, is key to overcoming false facts.

Whitney Grace, March 11, 2020

Facebook Is Definitely Evil: Plus or Minus Three Percent at a 95 Percent Confidence Level

March 2, 2020

The Verge Tech Survey 2020 allegedly and theoretically reveals the deepest thoughts, preferences, and perceptions of people in the US. The details of these people are sketchy, but that’s not the point of the survey. The findings suggest that Facebook is a problem. Amazon is a problem. Other big tech companies are problems. Trouble right here in digital city.

The findings come from a survey of 1,123 people “nationally representative of the US.” There was no information about income, the groups with which the subjects identify, or methodology. But the result is reported as plus or minus three percent at a 95 percent confidence level. That sure seems okay despite DarkCyber’s questions about:

  • Sample selection. Who pulled the sample, from where, were people volunteers, etc.
  • “Nationally representative” means what? Was it the proportional representation method? How many people from Montana and the other “states”? What about Puerto Rico? Who worked for which company?
  • Plus or minus three percent. That’s a swing at a 95 percent confidence level. (A quick arithmetic check appears after this list.) In terms of optical character recognition, that works out to three to six errors per page about 95 percent of the time. Is this close enough for a drone strike or an enforcement action? Oh, right, this is a survey about big tech. Big tech doesn’t think the DarkCyber way, right?
  • What were the socioeconomic strata of the individuals in the sample?
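
Readers who remember their statistics classes can check the margin of error claim themselves. A back-of-the-envelope calculation, assuming simple random sampling and the worst case proportion p = 0.5 (assumptions the survey write up does not document):

```python
import math

# Worst-case margin of error for a proportion at 95% confidence.
# Assumes simple random sampling, which the survey does not document.
n = 1123          # reported sample size
p = 0.5           # worst case: maximizes p * (1 - p)
z = 1.96          # z-score for a 95% confidence level

moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {moe:.1%}")  # roughly +/- 2.9%, i.e. "three percent"
```

So the plus or minus three percent figure is arithmetically plausible for 1,123 respondents; the unanswered sample selection questions are the real issue.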

What’s revealed or discovered?

First, people love most of the high-profile “names” or “brands.” Amazon is numero uno, the Google is number two, and YouTube (which is the Google, in case you have forgotten) is number three. So far, the data look like a name recognition test. “Do you prefer this unknown lye soap or Dove?” Yep, people prefer Dove. But lye soap may be making a comeback.

The stunning finding is that Facebook and Twitter impact society in a negative way. Contrast this with lovable Google and Amazon: 72 percent are favorable toward the Google and 70 percent are favorable toward Amazon.

Here are the data about which companies people trust. Darned amazing. People trust Microsoft and Amazon the most.

[Image: chart of trust ratings by company from The Verge Tech Survey 2020]

Which companies do the homeless and people in rural West Virginia trust?

Plus, 72 percent of the sample believe Facebook has too much “power.” What does power mean? No clue is given in the context of this survey.

Gentle reader, please, examine the article containing these data. I want to go back in time and reflect on the people who struggled in my statistics classes. Painful memories, but I picked up some cash tutoring. I got out of that business because some folks don’t grasp numerical recipes.

Stephen E Arnold, March 2, 2020

Social Media Versus a Nation State: Pakistan Versus the Facebook and Friends

February 29, 2020

DarkCyber believes that collisions of conscience and money will become more frequent in 2020. “Facebook, Twitter, Google Threaten to Suspend Services in Pakistan” explains that the Asia Internet Coalition does not want a nation state to get in the way of the standard operating procedures for US companies. Imagine. A country telling US firms what’s okay and what’s not okay. A mere country!

The government of Pakistan’s social media position is reflected in this passage from the article:

The new set of regulations makes it compulsory for social media companies to open offices in Islamabad, build data servers to store information and take down content upon identification by authorities. Failure to comply with the authorities in Pakistan will result in heavy fines and possible termination of services.

The consequences of ignoring the nation state’s approach to social media are not acceptable to the US companies. Pakistan’s ideas are easy to understand:

According to the law, authorities will be able to take action against Pakistanis found guilty of targeting state institutions at home and abroad on social media.

The law will also help the law enforcement authorities obtain access to data of accounts found involved in suspicious activities. It would be the said authority’s prerogative to identify objectionable content to the social media platforms to be taken down. In case of failure to comply within 15 days, it would have the power to suspend their services or impose a fine worth up to 500 million Pakistani rupees ($3 million).

DarkCyber finds it interesting that three high profile social media companies have formed some sort of loose federation in order to catch Pakistan’s attention.

Will the play work? Will other countries fall in line with the social media firms’ ideas of what’s acceptable and what’s not? Will China, Russia, and their client states go with the social media flow or resist? Are the US companies unreasonable?

Interesting questions.

Stephen E Arnold, February 29, 2020

Twitter: Embracing Management Maturity?

January 20, 2020

Twitter has a new initiative in 2020 to keep academic researchers honest, although it is not advertised in that manner. TechCrunch shares the details in the article, “Twitter Offers More Support To Researchers-To ‘Keep Us Accountable.’” Twitter’s new support takes the form of a hub called “Twitter Data for Academic Researchers,” which offers easier access to Twitter’s information and support for its APIs. Within the hub, one can apply for a developer account and find links to researcher tools and information about the APIs Twitter offers.

Twitter apparently added the Twitter Data for Academic Researchers hub this year based on researchers’ demands. The social media platform states it wants to encourage communication with and offer more support to developers. One reason Twitter wants more transparency and easier communication with its developers is the 2020 US presidential election. Twitter, like most social media platforms, wants to cut down on the bots and false news reports that affected the 2016 election. There is also the need to tamp down these accounts on a regular basis:

“Tracking conversation flow on Twitter also still means playing a game of ‘bot or not’ — one that has major implications for the health of democracies. And in Europe Twitter is one of a number of platform giants which, in 2018, signed up to a voluntary Code of Practice on disinformation that commits it to addressing fake accounts and online bots, as well as to empowering the research community to monitor online disinformation via “privacy-compliant” access to platform data.”
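
What does “bot or not” scoring look like in practice? Here is a toy heuristic sketch; it is not Twitter’s method or any production system, and the feature names and weights are invented for illustration. Real systems, and the researchers Twitter is courting, use far richer features and trained models:

```python
def bot_score(account):
    """Crude heuristic bot score in [0, 1]. Purely illustrative."""
    score = 0.0
    if account["tweets_per_day"] > 100:                   # inhuman posting rate
        score += 0.4
    if account["followers"] < account["following"] / 10:  # follow-spam pattern
        score += 0.3
    if account["account_age_days"] < 30:                  # freshly created
        score += 0.2
    if not account["has_profile_photo"]:                  # default avatar
        score += 0.1
    return min(score, 1.0)

suspect = {"tweets_per_day": 240, "followers": 12, "following": 4800,
           "account_age_days": 9, "has_profile_photo": False}
print(bot_score(suspect))  # 1.0: worth a closer look, not proof of a bot
```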

Twitter wants to support its developer community, but the transparency also makes it easier for Twitter to hold people responsible for their actions. The company keeps tabs on how its technology is used while also assisting developers with their work. It is a great idea, and if trouble arises, it might make it easier to track down the bad actors who started the mess. It is also another score for Twitter, because Facebook does not support academics well: Facebook has restricted its APIs for researchers and apparently does not want to stop false information from spreading.

Whitney Grace, January 20, 2020

Bye-Bye Apple Store Reviews And Ratings

December 17, 2019

Apple makes products that inspire loyalty in some. Apple believes it knows best, too.

Some believe the Mac operating system is superior to Windows 10 and Linux in virus protection, performance, and longevity.

Is Apple perfect? Sure, to a point. But the company can trip over its own confidence. One good thing about Apple is that it is known for good customer service, acceptance of negative feedback, and allowing customers to review and rate products on the Apple Store. In a business move inspired by Apple’s changing of its maps in Russia, Apple Insider states that “Apple Pulls All Customer Reviews From Online Apple Store.”

On Apple’s online retail stores, all of the user review pages have been removed from the US, Australian, and UK Web sites. Apple has been praised for its transparency and for allowing users to post negative reviews on the official Apple store. If Apple makes this removal a business practice, it could lose its congenial reputation.

Apple Insider used the Wayback Machine and discovered that the reviews were pulled sometime between the evening of November 16 and the morning of November 17. Despite all of the negative reviews, the company can withstand a little negativity and does not even pay attention to many of them:

“A YouTube video offered as part of the tip was published by the popular photography account, Fstoppers, titled “Apple Fanboys, Where is your God now?” In the video, the host reads a selection of negative reviews of the new 16-inch MacBook Pro with the video published on November 16, coinciding with the removal of the website feature.

However, it remains to be seen if the video had anything to do with Apple’s decision to remove the reviews, given the 56 thousand page views at the time of publication doesn’t seem like a high-enough number for Apple to pay attention to the video’s content. Other videos have been more critical about the company’s products, and some with far higher view counts, but evidently Apple seemingly does not spend that much time involving itself with such public complaints.”

The fact is that Apple makes some $60,000 pro products, and if just plain old people have problems, those happy buyers can visit Apple stores and search for a Genius to resolve them.

If Apple cannot fix the problems, a few believers might complain, move on, and then buy the next Apple product. Then the next one and the next and the next… Reviews are not necessary, right?

Whitney Grace, December 17, 2019

China Develops Suicide Detecting AI Bot

December 10, 2019

Most AI bots are used for customer support, massive online postings, downloading stuff, and criminal mischief. China has found another use for AI bots: detecting potential suicides. The South China Morning Post shared the article, “This AI Bot Finds Suicidal Messages On China’s Weibo, Helping Volunteer Psychologists Save Lives.” Asian countries have some of the world’s highest suicide rates. To combat the problem, Huang Zhisheng created the Tree Hole bot in 2018 to detect suicidal messages on Weibo, the Chinese equivalent of Twitter. The Tree Hole bot finds potentially suicidal users posting on Weibo, then connects them with volunteers to discuss their troubles. Huang’s effort has prevented more than one thousand suicides.

In 2016, 136,000 people committed suicide in China, 17% of the world’s suicides that year. The World Health Organization states that suicide is the second leading cause of death among people ages 15-29. Other companies like Google, Facebook, and Pinterest have used AI to detect potentially suicidal users or self-harmers, but one of the biggest roadblocks is privacy concerns. Huang notes that saving lives is more important than privacy.

The Tree Hole bot works differently from other companies’ systems to find alarming posts:

“The Tree Hole bot automatically scans Weibo every four hours, pulling up posts containing words and phrases like “death”, “release from life”, or “end of the world”. The bot draws on a knowledge graph of suicide notions and concepts, applying semantic analysis programming so it understands that “not want to” and “live” in one sentence may indicate suicidal tendency.

In contrast, Facebook trains its AI suicide prevention algorithm by using millions of real world cases. From April to June, the social media platform handled more than 1.5 million cases of suicide and self-injury content, more than 95 per cent of which were detected before being reported by a user. For the 800,000 examples of such content on Instagram during the same period, 77 per cent were first flagged by the AI system first, according to Facebook, which owns both platforms.”
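
Based on that description, the first-pass detection step looks like keyword matching plus a sentence-level co-occurrence rule. Here is a minimal sketch of the idea; the phrase lists are stand-ins taken from the quote above, and the real Tree Hole bot uses a knowledge graph and semantic analysis, not bare string matching:

```python
import re

# Stand-ins for the bot's suicide-related vocabulary (from the article quote).
DIRECT_PHRASES = ["end of the world", "release from life", "death"]
# Co-occurrence rule from the article: "not want to" + "live" in one sentence.
CO_OCCUR_RULES = [("not want to", "live")]

def scan_post(text):
    """Return a risk flag if a post matches direct phrases or co-occurrence rules."""
    lowered = text.lower()
    for phrase in DIRECT_PHRASES:
        if phrase in lowered:
            return ("direct", phrase)
    for sentence in re.split(r"[.!?]", lowered):
        for a, b in CO_OCCUR_RULES:
            if a in sentence and b in sentence:
                return ("co-occurrence", (a, b))
    return None

print(scan_post("Some days I cannot keep going. I do not want to live."))
# -> ('co-occurrence', ('not want to', 'live'))
```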

Assisting potential suicide victims is time consuming, and Huang is developing a chatbot that he hopes can take the place of Tree Hole volunteers. Mental health professionals argue that an AI bot cannot take the place of a real human, and developers point out there is not enough data to make an effective therapy bot.

Suicide prevention AI bots are terrific, but instead of relying on volunteers only, would it be possible, at least outside of China, to create a non-profit organization staffed by professionals and volunteers?

Whitney Grace, December 10, 2019

Comedian Meets Times of Israel: A Draw?

December 3, 2019

News about Borat – I mean Baron Cohen – has been flowing. A US pundit gushed over a speech by Mr. Baron Cohen, a comedian. The Times of Israel took another approach to the Baron Cohen critique of social media. “It’s not Facebook, Sacha, It’s Humanity” stated:

Baron Cohen is charismatic, well-spoken, and appeals to the most primal emotion of all mankind: fear. This makes him an excellent propagandist. That does not mean he is completely wrong, but it does not make him entirely right, either.

DarkCyber noted this statement in the write up:

The main argument Baron Cohen made in his speech, which is neither original nor new, is that social media platforms do not assume the mantle of preventing the numerous lies and profound hatred that is disseminated through them. Baron Cohen echoed the global criticism of the ease by which one can spread conspiracy theories, invent news headlines, make up figures, and incite against sectors, genders, minorities, and religions.

The write up pointed out:

Human history shows that where information does not flow freely and in times when information is blocked by geographical barriers and can only slowly creep out to the rest of the world — that is when the worst kind of atrocities take place.

Is there a fix? The article suggests:

There are many issues with the way Facebook, Twitter, and Google operate, but very few of them stem from the lack of regulation of the content posted on them. If anything, it would be much more practical and appropriate to review these companies’ conduct as service providers.

The comedian or the journalist? Where does the truth set up a campsite?

Stephen E Arnold, December 3, 2019

Can Machine Learning Pick Out The Bullies?

November 13, 2019

In Walt Disney’s 1942 classic Bambi, Thumper the rabbit was told, “If you can’t say something nice, don’t say nothing at all.”

Poor grammar aside, the thumping rabbit did deliver wise advice to the audience. Then came the Internet and anonymity, and the trolls were released upon the world. Internet bullying is one of the world’s top cyber crimes, along with identity and money theft. Passionate anti-bullying campaigners, particularly individuals who were cyber-bullying victims, want social media Web sites to police their users and prevent the abusive crime. Trying to police the Internet is like herding cats. It might be possible with the right type of fish, but cats are not herd animals, and they scatter once the tasty fish is gone.

Technology might have advanced enough to detect bullying, and AI could be the answer. Innovation Toronto wrote, “Machine Learning Algorithms Can Successfully Identify Bullies And Aggressors On Twitter With 90 Percent Accuracy.” AI’s biggest problem is that while algorithms can identify and harvest information, they lack the ability to understand emotion and context. Many bullying actions on the Internet are sarcastic or hidden within metaphors.

Computer scientist Jeremy Blackburn and his team from Binghamton University analyzed bullying behavior patterns on Twitter. They discovered useful information to understand the trolls:

“ ‘We built crawlers — programs that collect data from Twitter via variety of mechanisms,’ said Blackburn. ‘We gathered tweets of Twitter users, their profiles, as well as (social) network-related things, like who they follow and who follows them.’ ”

The researchers then performed natural language processing and sentiment analysis on the tweets themselves, as well as a variety of social network analyses on the connections between users. The researchers developed algorithms to automatically classify two specific types of offensive online behavior, i.e., cyber bullying and cyber aggression. The algorithms were able to identify abusive users on Twitter with 90 percent accuracy. These are users who engage in harassing behavior, e.g. those who send death threats or make racist remarks to users.

“‘In a nutshell, the algorithms ‘learn’ how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples,’ said Blackburn.”
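
The “weighing certain features” step describes standard supervised text classification. A minimal sketch with scikit-learn and toy data gives the flavor; the researchers’ actual models also used network and profile features, which this text-only toy omits:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data: 1 = abusive, 0 = typical. The real study used large
# volumes of tweets plus network features (who follows whom), not text alone.
tweets = [
    "you are worthless and everyone hates you",
    "go away or you will regret it",
    "had a great time at the conference today",
    "excited to share our new research paper",
]
labels = [1, 1, 0, 0]

# TF-IDF turns tweets into weighted word features; the classifier learns
# which features separate bullies from typical users.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["nobody wants you here, just leave"]))  # likely [1]
```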

Blackburn and his team’s algorithm only detects aggressive behavior; it does not do anything to prevent cyber bullying. The victims still see and are harmed by the comments and bullying users, but it does give Twitter a heads-up on removing the trolls.

The anti-bullying algorithm catches bullying only after there are victims. It does little to assist the victims, but it may prevent future attacks. What steps need to be taken to prevent bullying altogether? Maybe schools need to teach classes on Internet etiquette alongside the Common Core; then again, if it is not on the test, it will not be in a classroom.

Whitney Grace, November 13, 2019
