Palantir: Cambridge Analytica Secondary Shock Wave

April 19, 2018

Data analysis firm Palantir has come under scrutiny after it was learned that one of its employees contributed to Cambridge Analytica's acquisition of private data back in 2013 and 2014. Now BuzzFeed News emphasizes, "Palantir Had No Policy on Social Media Data Collection Prior to 2015." The company was used to working with internal data for organizations like the FBI and JPMorgan Chase, to name just a couple of big-name examples, where the data is clearly the client's property. When Palantir began working with social-media data, it seems it failed to anticipate the need for a comprehensive policy. Reporter William Alden writes:

“Palantir insiders felt that the company’s ‘ad hoc’ approach to handling social media data for customers in general was ‘becoming unworkable,’ a senior engineer said in an October 2014 memo not related to Cambridge Analytica. Palantir took steps to develop a social media data policy in early 2015, soliciting input from employees who’d worked on customer accounts involving the use of such data, an email from that time shows. Palantir has said previously that its employee, Alfredas Chmieliauskas, advised the Cambridge Analytica team in ‘an entirely personal capacity’ from 2013 to 2014, and that Cambridge Analytica was never a Palantir customer. There is no indication in the documents seen by BuzzFeed News that the push by Palantir to develop the social media policy had anything to do with Cambridge Analytica. Rather, the push was tied to requests by Palantir’s customers to mine social data during a time when Facebook’s restrictions on accessing and gathering data were much looser.”

The article reveals a few more details about Palantir's internal discussion and reminds us that the prevailing attitude toward social-media data was much more relaxed then than it is today. We trust that the company has tightened up its policy since. Founded in 2004, Palantir is based in Palo Alto, California, and has offices around the world.

This alleged interaction may cause a gentle breeze or a cyclone. Stay tuned.

Cynthia Murrell, April 19, 2018

Social Media: Toxic for Children?

April 13, 2018

The Facebook chief's marathon testimony sets the stage for some government action on the social media company. Bubbling beneath the surface, in my opinion, was the idea that Facebook influenced the 2016 presidential election, either wittingly or unwittingly. The math club culture, however, is pleased with its revenue, not the grilling.

Social media is well-known ground for toxic thought and behavior among adults. Shaming, bad-mouthing, spreading rumors, and even more damaging acts have been attributed to Twitter, Facebook, and the like. As bad as we know this world is, our children are experiencing just as nasty an environment, one study suggests. We learned more in a recent Independent article, "Two in Five Children Made Anxious Every Week When Using the Internet, Research Says."

According to the story:

"Almost half of young people said that in the last year they had experienced someone being mean to them over the internet – or they had been excluded online, new research has revealed…. Meanwhile, eight per cent of schoolchildren surveyed said these negative experiences happened to them all or most of the time, according to the poll."

Sadly, this has become an unavoidable part of adolescence. It is impossible to shield children from this kind of behavior, and the Independent story doesn't really offer a solution. Some experts have an interesting one: stay online. One psychologist says that, much like standing up to a schoolyard bully in years past, children should not ignore or block a bully but push back. If they stand up for themselves, the thinking goes, others will too, and that will drive the bully off. It's a bold thought for a problem that is dominating young minds today.

After more than a decade of “let them go,” some changes may be difficult because social media has transformed some hearts and minds.

Stephen E Arnold, April 13, 2018

Cambridge Analytica: The April 3, 2018, DarkCyber Report Is Now Available

April 3, 2018

DarkCyber for April 3, 2018, is now available. The new program can be viewed at www.arnoldit.com/wordpress and on Vimeo at https://vimeo.com/262710424.

This week's program focuses on the Facebook, GSR, and Cambridge Analytica data controversy. The 12-minute video addresses the role of GSR and the Cambridge professor who developed a personality profile app. The DarkCyber program outlines how raw social data is converted into actionable, psychographic "triggers." By connecting individuals, groups, and super-groups with "hot buttons" and contentious political issues, behaviors can be influenced, often in an almost undetectable way.

The DarkCyber research team has assembled information from open source coverage of Cambridge Analytica and has created a generalized "workflow" for the Facebook-type data set. The outputs of the workflow are "triggers" that can be converted into shaped messages intended to influence the behaviors of individuals, groups, and super-groups.

The program explains how psychographic analyses differ from the better-known demographic analyses of Facebook data. The link analysis or social graph approach is illustrated in such a way that anyone can grasp the potential of these data outputs. The program includes a recommendation for software which anyone with basic programming skills can use to generate "graphs" of relationships, centers of influence, and the individuals who are likely to take cues from those centers of influence.
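The post does not name the recommended software, but the general idea is ordinary link analysis: build a graph of who connects to whom, then score the nodes by centrality to surface likely centers of influence. A minimal sketch of that idea, using the open source NetworkX library and invented interaction data (the names and edges below are hypothetical, not drawn from any Cambridge Analytica material), might look like this:

```python
# Minimal link-analysis sketch: find "centers of influence" in a small,
# made-up interaction graph. Names and edges are invented for illustration.
import networkx as nx

# Hypothetical interaction data: (person_a, person_b) pairs.
edges = [
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("eve", "alice"), ("frank", "alice"),
    ("carol", "dave"), ("eve", "frank"),
]

graph = nx.Graph(edges)

# Betweenness centrality scores nodes that sit on many shortest paths
# between others, one reasonable proxy for a "center of influence."
centrality = nx.betweenness_centrality(graph)

for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person:>6}  {score:.3f}")
```

Individuals with the highest scores are the ones best placed to relay a shaped message to the rest of the network, which is roughly what the program means by people "likely to take cues" from centers of influence.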

DarkCyber’s next special feature focuses on the Grayshift GrayKey iPhone unlocking product. The air date will appear in Beyond Search.

Kenny Toth, April 3, 2018

DarkCyber Explores the Cambridge Analytica Matter

March 29, 2018

Short honk: The April 3, 2018, DarkCyber devotes the program to the Cambridge Analytica matter. What makes this program different is the DarkCyber approach. The DarkCyber researchers examined open source information for factoids about how Cambridge Analytica created its "actionable" information for political clients. If you want to see how a social media survey question can generate "triggers" that cause action via an image, a tweet, or a blog post, tune in on April 3, 2018. Plus, the program provides a link so you can download an application that can be used to generate "centers of influence." Who knows? You could become the next big thing in content analysis and weaponizing information.

Make a note. On Tuesday, April 3, 2018, you will be able to view the video at www.arnoldit.com/wordpress or on Vimeo.

Kenny Toth, March 29, 2018

Cambridge Analytica and Fellow Travelers

March 26, 2018

I read Medium’s “Russian Analyst: Cambridge Analytica, Palantir and Quid Helped Trump Win 2016 Election.” Three points straight away:

  1. The write up may be a nifty piece of disinformation
  2. The ultimate source of the “factoids” in the write up may be a foreign country with interests orthogonal to those of the US
  3. The story I saw is dated July 2017, but dates – like other metadata – can be fluid unless kept in a specialized system which prevents after-the-fact tampering.

Against this background of what may be hefty problems, let me highlight several of the points in the write up I found interesting.

More than one analytics provider. The linkage of Cambridge Analytica, Palantir Technologies, and Quid is not a surprise. Multiple tools, each selected for its particular utility, are a best practice in some intelligence analytics operations.

A Russian source. The data in the write up appear to arrive via a blog by a Russian familiar with the vendors, the 2016 election, and how analytic tools can yield actionable information.

Attributing “insights.” Palantir allegedly output data which suggested that Mr. Trump could win “swing” states. Quid’s output suggested, “Focus on the Midwest.” Cambridge Analytica suggested, “Use Twitter and Facebook.”

If you are okay with the source and have an interest in what might be applications of each of the identified companies’ systems, definitely read the article.

My April 3, 2018, DarkCyber video program focuses on my research team's reconstruction of a possible workflow. And, yes, the video accommodates inputs from multiple sources. We will announce the location of the Cambridge Analytica, GSR, and Facebook "reconstruction" in Beyond Search.

Stephen E Arnold, March 26, 2018

Algorithm Positions Microsoft on Top of Global Tech Field

March 23, 2018

This is quite a surprise. Reporting the results of its own analysis, Reuters announces, "Microsoft Tops Thomson Reuters Top 100 Global Tech Leaders List." The write-up tells us that second and third place went to:

… Chipmaker Intel and network gear maker Cisco Systems. The list, which aims to identify the industry’s top financially successful and organizationally sound organizations, features US tech giants such as Apple, Alphabet, International Business Machines and Texas Instruments, among its top 10. Microchip maker Taiwan Semiconductor Manufacturing, German business software giant SAP and Dublin-based consultant Accenture round out the top 10. The remaining 90 companies are not ranked, but the list also includes the world’s largest online retailer Amazon and social media giant Facebook.


The results are based on a 28-factor algorithm that measures performance across eight benchmarks: financial, management and investor confidence, risk and resilience, legal compliance, innovation, people and social responsibility, environmental impact, and reputation. The assessment tracks patent activity for technological innovation and sentiment in news and selected social media as the reflection of a company’s public reputation. The set of tech companies is restricted to those that have at least $1 billion in annual revenue.
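Thomson Reuters does not publish the 28 factors or their weights, but structurally this kind of ranking amounts to a weighted composite score over the eight benchmark categories. A toy sketch, with invented weights and scores purely for illustration, might look like this:

```python
# Toy illustration of a weighted composite score across the eight benchmarks.
# The weights and scores are invented; Thomson Reuters has not published its
# actual 28-factor formula.
BENCHMARKS = [
    "financial", "management and investor confidence", "risk and resilience",
    "legal compliance", "innovation", "people and social responsibility",
    "environmental impact", "reputation",
]

def composite_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-benchmark scores (each assumed to be 0-100)."""
    total_weight = sum(weights[b] for b in BENCHMARKS)
    return sum(scores[b] * weights[b] for b in BENCHMARKS) / total_weight

# Example: equal weights and hypothetical scores for one company.
weights = {b: 1.0 for b in BENCHMARKS}
scores = {b: 80.0 for b in BENCHMARKS}
print(composite_score(scores, weights))  # 80.0
```

In the real methodology each benchmark would itself aggregate several of the 28 underlying factors (patent activity, news and social media sentiment, and so on); the sketch only shows the final combination step.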

That is an interesting combination of factors; I’d like to see that Venn diagram. Some trends emerged from the report. For example, 45 of those 100 companies are based in the US (but 47 in North America); 38 are headquartered in Asia, 14 in Europe, and one in Australia.

Cynthia Murrell, March 23, 2018

Twitter: Designed for Interesting Messaging

March 15, 2018

Fake news is getting harder to control, and social media networks make it harder to separate truth from lies. Engadget shares how "Twitter's Fake News Problem Is Getting Worse" and how tragedy exacerbates the problem. For example, when a crazed shooter opened fire at a high school in Parkland, Florida, social media, including Twitter, helped spread fake news. The fake news misidentified the gunman and the number of gunmen, claimed a comedian was one of the shooters when that was just a meme, and misidentified missing people.

The problem is getting worse with doctored news stories, reporters being accused of false claims, and misinformed readers reposting the information again and again. The most ironic thing is that this is what social media, especially Twitter, was designed for:

“It’s just further evidence that Twitter’s fake-news problem is getting worse. After all, Twitter’s very nature is to spread information at lightning speed with little to no oversight. And ironically, it is this quality that brought Twitter to prominence in the first place. One of Twitter’s defining moments was when Janis Krum tweeted about U.S. Airways Flight 1549 landing in the Hudson River on January 15th, 2009 — he was the first to have reported it, and the tweet soon went viral. “It changed everything,” Twitter co-founder Jack Dorsey told CNBC in 2013. “Suddenly the world turned its attention because we were the source of news — and it wasn’t us, it was this person in the boat using the service.” Twitter was no longer just a place for discussing what you had for lunch. It became a place where you could get news from real people experiencing events first-hand, which was often faster than mainstream news.”

Using Twitter and other social media networks as news aggregators has brought a fresh perspective to Internet social interactions. Fake news is easily generated and shared not only by bots, but by Internet trolls and then multiplied by people who do not think critically about content.

Here's a Beyond Search tip: Refrain from reposting anything you are unsure about. But most people have neither the filters nor the skills to distinguish fact from fiction. For many, their beliefs make the facts. Twitter and similar tools become easy-to-use amplifiers.

Twitter and other social networks do have a responsibility to curb false news. In the past, old-fashioned newspapers were, at least in principle, held accountable, and reporters strove to fact check their articles. Whatever happened to the fact checking department? (Tip: Library reference desks might be an oasis for some fact checkers.) Maybe we need a tool that runs everything through Wikipedia first, which seems to be the easy way for Google to wriggle off the hook for certain types of content.

Whitney Grace, March 15, 2018

Google Accused of Censorship

March 13, 2018

Google, Facebook, and other social media and news outlets are concerned with fake news. They have taken preliminary measures to curb false information, but Live Mint says, "Google Is Filtering News For The Wrong Reason." Google, like other news outlets and social media platforms, is a business. While it delivers products and services, its entire goal is to turn a profit. Anything that affects the bottom line, such as false information, is deemed inappropriate.

Google deemed the Russian government-owned news Web sites RT and Sputnik to be false information generators, so the search engine giant reworked its ranking algorithm. The new ranking algorithm pushes RT and Sputnik way down in news searches. Live Mint explained that this made RT and Sputnik victims, but Google does not want to ban these Web sites. Instead, Google has other ideas:

Schmidt’s words are a riff on an April post by Google vice president of engineering Ben Gomes, who teased changes to how Google searches for news. New instructions targeted “deceptive web pages” that look like news but seek to “manipulate users” with conspiracy theories, hoaxes, and inaccurate information. ‘We’ve adjusted our signals to help surface more authoritative pages and demote low-quality content,’ Gomes wrote.

The author makes a pointed argument about why it is bad for businesses to alter their services, such as a news aggregator, to avoid bad press and increased regulation. He also argues that while false information Web sites are harmful, it is not Google's responsibility to censor them.

It is a good point, but when people take everything posted on the Internet as fact, someone has to make the moral argument for promoting the truth.

Whitney Grace, March 13, 2018

Social Media: Toxic for Children

March 7, 2018

Social media is well-known ground for toxic thought and behavior among adults. Shaming, bad-mouthing, spreading rumors, and even more damaging acts have been attributed to Twitter, Facebook, and the like. As bad as we know this world is, our children are experiencing just as nasty an environment, one study suggests. We learned more in a recent Independent article, "Two in Five Children Made Anxious Every Week When Using the Internet, Research Says."

According to the story:

“Almost half of young people said that in the last year they had experienced someone being mean to them over the internet – or they had been excluded online, new research has revealed.

“Meanwhile, eight per cent of schoolchildren surveyed said these negative experiences happened to them all or most of the time, according to the poll.”

Sadly, this has become an unavoidable part of adolescence. It is impossible to shield children from this kind of behavior, and the Independent story doesn't really offer a solution. Some experts have an interesting one: stay online. One psychologist says that, much like standing up to a schoolyard bully in years past, children should not ignore or block a bully but push back. If they stand up for themselves, the thinking goes, others will too, and that will drive the bully off. It's a bold thought for a problem that is dominating young minds today.

Patrick Roland, March 7, 2018

Facebook Begins Censoring Content for Good and Ill

March 5, 2018

Facebook has been under a lot of scrutiny for fake news and propaganda lately. While the company has acknowledged its mistakes, the course it is taking to fix these problems should alarm people. We learned more about the social media giant's censorship from a recent story in the Intercept, "Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments."

According to the story:

Facebook has been on a censorship rampage against Palestinian activists who protest the decades-long, illegal Israeli occupation, all directed and determined by Israeli officials. Indeed, Israeli officials have been publicly boasting about how obedient Facebook is when it comes to Israeli censorship orders.


Shortly after news broke earlier this month of the agreement between the Israeli government and Facebook, Israeli Justice Minister Ayelet Shaked said Tel Aviv had submitted 158 requests to the social media giant over the previous four months asking it to remove content it deemed “incitement.” She said Facebook had granted 95 percent of the requests.

This is a no-win situation for Facebook. By trying to keep questionable content off the net, it opens the door for censoring its users. A slippery slope, to be sure. If we were to guess, Facebook will make a few more missteps before correcting things appropriately.

Patrick Roland, March 5, 2018
