Cambridge: We Do It Huawei

September 28, 2021

Intelligence agencies are aware China has been ramping up its foreign espionage efforts, largely through civilian operatives. Now The Statesman reports, “Huawei Infiltrates Cambridge University.” We wonder what other universities have also been targeted. Perhaps our neighbor, the University of Tennessee at Knoxville? That institution is not too far from an interesting government operation.

Huawei is China’s mammoth technology company and is widely viewed as a security threat operating on behalf of the Chinese government. The U.S. maintains sanctions against the company, and several countries have banned Huawei’s 5G technology over security concerns. The article tells us:

“Huawei has been accused of ‘infiltrating’ a Cambridge University research centre after most of its academics were found to have ties with the Chinese company, The Times, UK reported. Three out of four of the directors at the Cambridge Centre for Chinese Management (CCCM) have ties to the company, and its so-called chief representative is a former senior Huawei vice-president who has been paid by the Chinese government. The university insists that one former Huawei executive has never delivered services to the centre while the firm itself has said any suggestion of impropriety is absurd. Daily Mail reported that critics have claimed that the Huawei ties are a demonstration that the university has allowed the CCCM to be infiltrated by the Chinese company which has been banned from joining Britain’s 5G network. Johnny Patterson, policy director of the Hong Kong campaign group, told the newspaper the university should investigate the relationship between Huawei and the CCCM.”

Not surprisingly, money appears to be a factor. British politician Iain Duncan Smith asserts Cambridge has become reliant on Chinese funding in recent years. He proposes an inquiry into the role of Chinese funding throughout UK institutions and companies. We wonder how many other countries are seeing a similar pattern. Is China trying to buy its way into world dominance? Is it working?

Cynthia Murrell, September 28, 2021

Life Long Learning or Else

September 28, 2021

Everyone wants to reduce stress, have “quality time”, and do the hybrid work thing with as much flexibility as possible. There’s something to fill the void. Navigate to “The Future of Work: Can You Adapt Fast Enough Before Becoming Unemployed?” The answer is, “Sure, there’s plenty of time in between Zooms, thumbtyping, and doom scrolling.”

The write up states:

AI will also impact the future of your employment. A future where AI might give rise to market segregation of low-skill, low pay, and high-skill, high pay. The author Martin Ford predicts a growing inequality based on the hollowing out of job skills.

The expert offering this delightful vision for the Gen Xers is Martin Ford, who is a futurist, a TED talker, and the author of Architects of Intelligence (2018). He is quoted as saying:

Also, inequality can greatly increase as essentially what’s happening with artificial intelligence is that capital is displacing labor and of course capital is owned by very few people; wealthy people tend to own lots of capital, and most other people do not own much. Over time it makes our whole society more unequal. I think this is going to be a real challenge for us in the coming decades.

How does one get ahead of this eight ball? Easy: pick a hot field like analytics and become an expert. Don’t like big data or smart software? You can become a management consultant.

Easy. Stress free. Lots of time for mobile device fiddling at a coffee shop.

Stephen E Arnold, September 27, 2021

Ethics Instruction: Who Knew?

September 24, 2021

Well, this is not particularly alarming. Despite increasing concern over the harm caused by unbridled algorithms, many AI students are still not being taught ethics in their coursework. The Next Web reports, “Data Science Students Don’t Know a Lot About Ethics–and That’s a Problem.” Ethical problem-solving is specifically mentioned in the National Academies’ recommended 10 training areas for data-science degrees. Considering the dramatic rise in students going into this field, the authors investigated the instruction undergraduates are receiving. They write:

“In our study, we compared undergraduate data science curricula with the expectations for undergraduate data science training put forth by the National Academies of Sciences, Engineering and Medicine. Those expectations include training in ethics. We found most programs dedicated considerable coursework to mathematics, statistics and computer science, but little training in ethical considerations such as privacy and systemic bias. Only 50% of the degree programs we investigated required any coursework in ethics. Why it matters: As with any powerful tool, the responsible application of data science requires training in how to use data science and to understand its impacts. Our results align with prior work that found little attention is paid to ethics in data science degree programs. This suggests that undergraduate data science degree programs may produce a workforce without the training and judgment to apply data science methods responsibly. … We believe explicit training in ethical practices would better prepare a socially responsible data science workforce.”

The study focused on R1 schools, or those with high levels of research activity. The authors note there may be more ethics instruction to be found at schools with lower levels of research or in graduate-level courses. It seems like more research is needed.

Cynthia Murrell, September 24, 2021

Apple, Facebook, and an Alleged Digital Trade for a Contentious Product

September 23, 2021

I read “Apple Threatened Facebook Ban over Slavery Posts on Instagram.” I have nothing but respect for the BBC, Brexit, and, of course, the Royals. I also believe everything I read online. (Doesn’t everyone?) Against this background, this BBC slavery write up is interesting indeed.

I read this passage twice to make sure I was getting the message:

Apple threatened to remove Facebook’s products from its App Store, after the BBC found domestic “slaves” for sale on apps, including Instagram, in 2019. The threat was revealed in the Wall Street Journal’s (WSJ) Facebook Files, a series of reports based on its viewing of internal Facebook documents.

Okay. Slave trade. Facebook. Info from “internal Facebook documents.”

Here’s another passage I circled with my trusty red Sharpie Magnum marker:

The trade was carried out using a number of apps including Facebook-owned Instagram. The posts and hashtags used for sales were mainly in Arabic, and shared by users in Saudi Arabia and Kuwait.

Okay. Arabic. Saudi Arabia. Kuwait.

And the Sir Gawain in this matter? China-compliant Apple.

It [the Murdoch-owned Wall Street Journal] said the social media giant only took “limited action” until “Apple Inc. threatened to remove Facebook’s products from the App Store, unless it cracked down on the practice”.

I hear the digital French Foreign Legion’s tune Le Boudin. Do you?

And the good news? The BBC stated:

In its June 2020 response to these, Facebook wrote: “Following an investigation prompted by an inquiry from the BBC, we conducted a proactive review of our platform. We removed 700 Instagram accounts within 24 hours, and simultaneously blocked several violating hashtags.” The following month the company said it removed more than 130,000 pieces of Arabic-language speech content related to domestic servitude in Arabic on both Instagram and Facebook. It added that it had also developed technology that can proactively find and take action on content related to domestic servitude – enabling it to “remove over 4,000 pieces of violating organic content in Arabic and English from January 2020 to date”.

Interesting indeed. Slavery. Facebook. Social media. Prompt action documented. Apple the pointy end of the stick for justice. Possible vacation ideas for some. The BBC. And more. Quite a write up.

Stephen E Arnold, September 23, 2021

War Intrudes. Big Tech Is There

September 21, 2021

War is complicated. It has never been black and white, except when the cause is undeniably evil. One of the worst days in modern history was September 11, 2001. After the terrorist attack on the World Trade Center in New York City, the United States under President George W. Bush declared war on terrorism. The US already had tenuous relationships with Middle Eastern countries, but they became worse when US troops were deployed to Iraq and Afghanistan.

It has been twenty years since Osama bin Laden ordered the terrorist attack on the US. President Joe Biden removed American forces from Afghanistan, but that does not mean an end is in sight.

Big Tech Sells War is managed by Crescendo, a project run by Little Sis, Action Center on Race and the Economy, and MPower Change. These are left-leaning organizations that fund and empower similarly aligned information sources. Big Tech Sells War explores how the US Department of Homeland Security and big tech companies, like Microsoft, Amazon, and Google, have perpetuated the global war on terror for profit. Other byproducts of this partnership are spreading xenophobia, racism against brown people, implementation of US fascism, and promoting white supremacy. Here is the first passage from their about page:

“Since 2001 the “Global War on Terror” has become a household phrase that has set the political, economic, and ideological agenda for the US and its accomplices. The GWoT has done less to keep people safe from terror as it has to grow the reach of US militarism and imperialism and terrorize people across Southwest Asia to Africa, throughout the Global South, and here in the United States. The terrorizing of Black, Indigenous, and people of color (BIPOC) communities in the US by police is another expression of this ideological war that shares the same tools and strategies of surveillance and control.”

At first glance, Big Tech Sells War lists some thought-provoking arguments worth further research, but reading between the lines yields thought patterns similar to conspiracy theorists. True, some conspiracy theories pan out, but without proper evidence they are little more than urban legends and junk.

Big Tech Sells War is an insular project run by organizations that manufacture their own data. The numbers and facts Big Tech Sells War presents lack citations or link directly back to a parent organization. These parent organizations spout reactionary facts through a biased lens. The LittleSis database contains relationship data between politicians and big tech names. There are even arrows and maps pointing out how they are connected!

Big Tech Sells War does not reference information other than itself. It is like the Scientologists claiming their “technology” stops wars and cures disease when the big media outlets never report it. The same goes here. Yes, media outlets have their own agendas, but that is why you research multiple sources to get a better understanding of the bigger picture.

Big Tech Sells War has teeny-tiny granules of truth, but finding them is like trying to separate salt and sugar with the naked eye.

Whitney Grace, September 21, 2021

Cough, Cough: A Phrase to Praise?

September 20, 2021

I read “Critics Warn of Apple, Google Chokepoint Repression.” The article contains a phrase which may become one to praise: “A convenient chokepoint.”

The write up is arriving a couple of decades too late. The chokepoints have been built, reinforced, and lobbied for over many years. Wall Street loves the Apples and Googles of the Silicon Valley money engines.

One doesn’t have to be much of a student of political science or have an MBA in nudging to figure out what’s going to happen. When threatened with financial loss, some of these outstanding American business entities will respond.

My hunch is that rolling over at the snap of fingers in China, Russia, and elsewhere will become predictable behavior. Instead of a treat, the obedient get to make money. The alternative is a kick in the digital ribs.

Stephen E Arnold, September 20, 2021

Facebook: A Pattern That Matches the Zuck

September 20, 2021

The laws of the United States (and most countries) are equally applied to everyone, unless you are rich and powerful. Facebook certainly follows this rule of thumb according to The Guardian article, “Facebook: Some High-Profile Users ‘Allowed To Break Platform’s Rules.’” Facebook has two sets of rules: one for high-profile users and one for everyone else. The Wall Street Journal investigated Facebook’s special list.

Rich and powerful people’s profiles, such as those of politicians, journalists, and celebrities, are placed on a special list that exempts them from Facebook’s rules. The official terms for these shortlisted people are “newsworthy,” “influential or popular,” or “PR risky.” The special list is called the XCheck or “CrossCheck” system. Supposedly, if these exempt people do post rule-breaking content, it is subject to review, but that review never happens. There are over 5.8 million people on the XCheck system, and the list continues to grow:

The WSJ investigation details the process known as “whitelisting”, where some high-profile accounts are not subject to enforcement at all. An internal review in 2019 stated that whitelists “pose numerous legal, compliance, and legitimacy risks for the company and harm to our community”. The review found favouritism to those users to be both widespread and “not publicly defensible”.
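For the curious, here is what whitelist-gated enforcement might look like at its simplest. This is a minimal sketch based only on the reporting quoted above; Facebook’s actual XCheck logic is not public, and the account names and decision rules below are invented for illustration:

```python
# Hypothetical sketch of whitelist-gated enforcement, based only on the
# WSJ/Guardian description of XCheck. Facebook's real logic is not public;
# the account names here are invented.

XCHECK_WHITELIST = {"famous_politician", "celebrity_account"}  # illustrative only

def enforce(account_id: str, post_violates_rules: bool) -> str:
    """Return the enforcement action for a single post."""
    if not post_violates_rules:
        return "no action"
    if account_id in XCHECK_WHITELIST:
        # Per the reporting, violations by shortlisted accounts go to a
        # separate review queue -- a review that may never actually happen.
        return "queued for XCheck review"
    return "removed"  # ordinary users face immediate enforcement

print(enforce("ordinary_user", True))      # removed
print(enforce("celebrity_account", True))  # queued for XCheck review
```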

Facebook said that the information The Wall Street Journal dug up was outdated and that the reporting glosses over how the social media platform is actively working on these issues. Facebook is redesigning CrossCheck to improve the system.

Facebook is spouting nothing but cheap talk. Facebook and other social media platforms will allow rich, famous, and powerful people to do whatever they want on their platforms. Why they allow this makes little sense, unless money is involved.

Whitney Grace, September 20, 2021

Facebook: Continuous Reality Distortion

September 14, 2021

Facebook CEO Mark Zuckerberg stated in 2019 that WhatsApp was designed with a “privacy-focused vision” for communication. WhatsApp supposedly offers end-to-end encryption. ProPublica shares that this is not entirely true in “How Facebook Undermines Privacy Protections For Its 2 Billion WhatsApp Users.” Essentially, the majority of WhatsApp messages are private, but items users flag are sifted through by WhatsApp employees.

These employees monitor the flagged messages for child pornography, terrorist plots, spam, and more. This type of monitoring appears contrary to WhatsApp’s mission, but Carl Woog, WhatsApp’s director of communications, did not regard this as content monitoring; he saw it as preventing abuse.

WhatsApp reviewers sign NDAs and, if asked, say they work for Accenture. They review over 600 violation tickets a day, leaving less than a minute for each one; then they decide whether to ban the account, put the user on “watch,” or do nothing. ProPublica describes the reviewers’ task:

“WhatsApp moderators must make subjective, sensitive and subtle judgments, interviews and documents examined by ProPublica show. They examine a wide range of categories, including “Spam Report,” “Civic Bad Actor” (political hate speech and disinformation), “Terrorism Global Credible Threat,” “CEI” (child exploitative imagery) and “CP” (child pornography). Another set of categories addresses the messaging and conduct of millions of small and large businesses that use WhatsApp to chat with customers and sell their wares. These queues have such titles as “business impersonation prevalence,” “commerce policy probable violators” and “business verification.””
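Do the arithmetic: 600 tickets in an eight-hour shift works out to roughly 48 seconds per decision. Here is a toy sketch of that ban / watch / do-nothing triage. The queue names come from the quoted documents; the decision logic is our invention, not WhatsApp’s:

```python
# Toy sketch of the triage decision ProPublica describes: a reviewer sees a
# flagged ticket from a named queue and picks one of three outcomes. Queue
# names come from the quoted documents; the logic below is invented.

from dataclasses import dataclass

SEVERE_QUEUES = {"Terrorism Global Credible Threat", "CEI", "CP"}
OTHER_QUEUES = {"Spam Report", "Civic Bad Actor"}

@dataclass
class Ticket:
    queue: str
    confirmed_violation: bool  # the reviewer's sub-minute judgment call

def triage(ticket: Ticket) -> str:
    """Ban, watch, or do nothing -- the three outcomes the article names."""
    if ticket.queue not in SEVERE_QUEUES | OTHER_QUEUES:
        raise ValueError(f"unknown queue: {ticket.queue}")
    if not ticket.confirmed_violation:
        return "do nothing"
    return "ban account" if ticket.queue in SEVERE_QUEUES else "place on watch"

print(triage(Ticket("Spam Report", True)))  # place on watch
print(triage(Ticket("CP", True)))           # ban account
```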

Unlike Facebook’s other platforms, Facebook and Instagram, WhatsApp does not release statistics about the content it reviews, citing its status as an encrypted service. Facebook also needs WhatsApp to generate a profit, because the company spent $22 billion on it in 2014. WhatsApp does share data with Facebook, despite its dedication to privacy, and Facebook has already faced fines for violating user privacy. WhatsApp data has been used to collect information on criminals, and governments want backdoors to access and trace data. That is framed as user safety, but governments can take observation too far.

Whitney Grace, September 14, 2021

Smart Software: Boiling Down to a Binary Decision?

September 9, 2021

I read a write up containing a nuance which is pretty much a zero or a one; that is, a binary decision. The article is “Amid a Pandemic, a Health Care Algorithm Shows Promise and Peril.” Okay, good news and bad news. The subtitle introduces the transparency issue:

A machine learning-based score designed to aid triage decisions is gaining in popularity — but lacking in transparency.

The good news? A zippy name: The Deterioration Index. I like it.

The idea is that some proprietary smart software includes explicit black boxes. The vendor identifies the basics of the method, but does not disclose the “componentized” or “containerized” features. The analogy I use in my lectures is that no one pays attention to a resistor; it just does its job. Move on.

The write up explains:

The use of algorithms to support clinical decision making isn’t new. But historically, these tools have been put into use only after a rigorous peer review of the raw data and statistical analyses used to develop them. Epic’s Deterioration Index, on the other hand, remains proprietary despite its widespread deployment. Although physicians are provided with a list of the variables used to calculate the index and a rough estimate of each variable’s impact on the score, we aren’t allowed under the hood to evaluate the raw data and calculations.
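To see why the opacity matters, consider a minimal sketch of what a weighted index of this general shape could look like. The variables and weights below are invented; Epic’s actual Deterioration Index is proprietary, which is precisely the article’s point:

```python
# Minimal sketch of a weighted "deterioration" style score. The variables and
# weights are invented for illustration; Epic's actual model is proprietary.

HYPOTHETICAL_WEIGHTS = {
    "respiratory_rate": 0.8,           # breaths/min above baseline
    "heart_rate": 0.5,                 # beats/min above baseline
    "oxygen_saturation_deficit": 1.2,  # percentage points below 95
}

def deterioration_score(vitals: dict) -> float:
    """Weighted sum of deviations -- the part clinicians cannot audit."""
    return sum(HYPOTHETICAL_WEIGHTS[k] * v for k, v in vitals.items())

score = deterioration_score({
    "respiratory_rate": 6,
    "heart_rate": 20,
    "oxygen_saturation_deficit": 4,
})
print(score)  # 6*0.8 + 20*0.5 + 4*1.2 = 19.6
```

With the real weights hidden, a clinician cannot check whether the score over- or under-counts any single vital sign.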

From my point of view, this is now becoming a standard smart software practice. In fact, when I think of “black boxes” I conjure an image of Stanford University and University of Washington professors, graduate students, and Google-AI types who share these outfits’ DNA. Keep the mushrooms in the cave, not out in the sun’s brilliance. I could be wrong, of course, but I think this write up touches upon a matter that some want to forget.

And what is this marginalized issue?

I call it the Timnit Gebru syndrome. A tiny issue buried deep in a data set or method assumed to be A-Okay may not be. What’s the fix? An ostrich-type reaction, a chuckle from someone with droit de seigneur? Moving forward because regulators and newly-minted government initiatives designed to examine bias in AI are moving with pre-Internet speed?

I think this article provides an interesting case example about zeros and ones. Where’s the judgment? In a black box? Embedded and out of reach.

Stephen E Arnold, September 9, 2021

Change Is Coming But What about Un-Change?

September 8, 2021

My research team is working on a short DarkCyber video about automating work processes related to smart software. The idea is that one smart software system can generate an output to update another smart software system. The trend was evident more than a decade ago in the work of Dr. Zbigniew Michalewicz, his son, and collaborators. He is the author of How to Solve It: Modern Heuristics. There were predecessors, and today many others follow smart approaches to operations for artificial intelligence, or what thumbtypers call AIOps. The DarkCyber video will become available on October 5, 2021. We’ll try to keep the video peppy because smart software methods are definitely exciting and mostly invisible. And like other embedded components, some of these “modules” will become commoditized and just used “as is.” That’s important because who worries about a component in a larger system? Do you wonder if the microwave is operating at peak efficiency with every component chugging along up to spec? Nope and nope.

I read a wonderful example of Silicon Valley MBA thinking called “It’s Time to Say “Ok, Boomer!” to Old School Change Management.” At first glance, the ideas about efficiency and keeping pace with technical updates make sense. The write up states:

There are a variety of dated methods when it comes to change management. Tl;dr it’s lots of paper and lots of meetings. These practices are widely regarded as effective across the industry, but research shows this is a common delusion and change management itself needs to change.

Hasta la vista, Messrs. Drucker and the McKinsey framework.

The write up points out that a solution is at hand:

DevOps teams push lots of changes and this is creating a bottleneck as manual change management processes struggle to keep up. But, the great thing about DevOps is that it solves the problem it creates. One of the key aspects where DevOps can be of great help in change management is in the implementation of compliance. If the old school ways of managing change are too slow why not automate them like everything else? We already do this for building, testing and qualifying, so why not change? We can use the same automation to record change events in real time and implement release controls in the pipelines instead of gluing them on at the end.

Does this seem like circular reasoning?

I want to point out that if one of the automation components operates using probability and the thresholds are incorrect, the data are poisoned (corrupted by intent or chance), or the “averaging” which is a feature of some systems triggers a butterfly effect, excitement may ensue. The idea is that a small change may have a large impact downstream; for example, a wing flap in Biloxi could create a flood at the 28th Street Flatiron stop.
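A tiny illustration of the point, with made-up numbers: when one model’s output feeds another system’s gate, a small threshold miscalibration flips the downstream decision.

```python
# Toy illustration of threshold sensitivity in chained automation. One
# model's output feeds another system's release gate; all numbers invented.

def upstream_model(signal: float) -> float:
    """Pretend probability that a change is safe to deploy."""
    return 0.5 + signal  # toy model

def release_gate(p_safe: float, threshold: float) -> str:
    return "auto-deploy" if p_safe >= threshold else "hold for human review"

p = upstream_model(0.199)               # p_safe = 0.699
print(release_gate(p, threshold=0.70))  # hold for human review
print(release_gate(p, threshold=0.695)) # auto-deploy -- a 0.005 threshold
                                        # shift changes what ships downstream
```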

Several observations:

  • AIOps are already in operation at outfits like the Google and will be componentized in AWS-style packages
  • Embedded stuff, like popular libraries, is just used and not thought about. The practice brings joy to bad actors who corrupt some library offerings
  • Once a component is up and running and assumed to be okay, those modules themselves resist change. When 20-somethings encounter mainframe code, their surprise is consistent. Are we gonna change this puppy or slap on a wrapper? What’s your answer, gentle reader?

Net net: AIOps sets the stage for more Timnit Gebru shoot-outs about bias and discrimination, as well as the type of cautions produced by Cathy O’Neil in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.

Okay, thumbtyper.

Stephen E Arnold, September 8, 2021
