France and US Businesses: Semi-Permanent Immiscibility?

November 30, 2022

The French government and two US high-technology poster kids don’t see eye to eye. Governments, particularly France’s, are not impressed with the business practices of some US firms. The tried and true “Senator, thank you for the question” and assurances that the companies in question are following the ethical precepts of respected French philosophers don’t work. “France Directs Schools to Stop Using Microsoft Office & Google Workspace” reports:

In a recent response to an interrogation by a Member of the Parliament, the French Minister of Education clarified that French schools should not use Microsoft 365 and Google Workspace. The reasons behind the Ministry’s position are twofold. First, the Ministry is concerned about the confidentiality and lawfulness of data transfers. Second, reliance on European providers is coherent with the government’s “cloud at the center” policy.

The write up explains that France’s view of privacy and the practices of Microsoft and Google are not in sync. Then there is the issue of the cloud and where data and information “are.” Given modern network and data center technology, the “there” is often quite tricky to pin down. Tricky is not a word the current French government feels comfortable using when talking about schools, teachers, students, and research conducted by French universities.

How will this play out? France will get its way. That’s why some French chickens carry labels which certify conformance. No label on that chicken, no deal.

Stephen E Arnold, November 30, 2022

The Collision of Nation State Bias and High School Science Club Management

November 28, 2022

CNN offered some interesting pictures of the labor-management misunderstanding in Zhengzhou, China. Even though I have been to China several times, I was not sure what made Zhengzhou different from other “informed” cities struggling with what may be an ill-advised approach to Covid. In fact, the images of law enforcement and disgruntled individuals are not particularly novel. These images become more interesting when a blurry background of Apple and a Taiwanese company adds a touch of chiaroscuro to the scenes.

What is interesting is that “Apple Has a Huge Problem with an iPhone Factory in China” mentions the “Taiwan contract manufacturing firm Foxconn.” CNN, however, does not offer any information about the involvement of individuals who want to create issues for Foxconn. China and Taiwan sort of coexist, but I am not certain that either the Chinese provincial government in Henan or the national government in Beijing is particularly concerned about what happens to either Apple or Foxconn.

The fact that workers suddenly became upset suggests that I have to exercise a willing suspension of disbelief and assume the dust-up was spontaneous. Sorry, a “Hey, this just happened because of pay” or some similar dismissive comment won’t make me feel warm and fuzzy.

The write up asserts:

The Zhengzhou campus has been grappling with a Covid outbreak since mid-October that caused panic among its workers. Videos of people leaving Zhengzhou on foot went viral on Chinese social media in early November, forcing Foxconn to step up measures to get its staff back….  But on Tuesday [November 22, 2022] night, hundreds of workers, mostly new hires, began to protest against the terms of the payment packages offered to them and also about their living conditions. Scenes turned increasingly violent into the next day as workers clashed with a large number of security forces. By Wednesday [November 23] evening, the crowds had quieted, with protesters returning to their dormitories on the Foxconn campus after the company offered to pay the newly recruited workers 10,000 yuan ($1,400), or roughly two months of wages, to quit and leave the site altogether.

Seems straightforward. A confluence of issues culminated in a protest.

Now let’s think about the issue this way. These are my working hypotheses.

First, Foxconn may not perceive the complaints of its employees as important. Sure, the factory workers have to do their job, but these are Chinese factory workers. Foxconn has a Taiwan spin. This may translate into Chinese government passivity. Let the Taiwan managers deal with the problems.

Second, Apple is a US outfit and it embraces some of the tenets of the high school science club management method. The kernel of the HSSCMM is that science club members know best. Others do not; therefore, if something is not on the radar of the science club, that “something” is irrelevant, silly, or just plain annoying.

Third, the workers have some awareness of the financial resources of Foxconn and Apple. Thus, like workers from an Apple store to the quiet halls of the Apple core spaceship, money talks.

Fourth, Covid. Yep, not going away it seems.

What happens when China is not too interested in Foxconn, Foxconn is not too interested in Chinese workers, and Apple is busy inventing ways to prevent people from upgrading their Mac computers?

That’s what CNN understands: protests, clashes, and violence. Toss in some Covid fear and one has an exciting story for consumers of CNN “real” news.

Is there a fix? For China and its attitude toward Taiwanese businesses which allegedly exploit Chinese workers, sure. I won’t explore that solution. For Foxconn, sure, but it will take time for Foxconn to de-China its production operations. For Apple, not really. The company will follow the logic of the science club: Find some people who will work for less.

Net net: Apple and its HSSCMM will probably not find too many fans in the Middle Kingdom. And Foxconn? Do China and Apple care? Apple cares about money. China cares about the Middle Kingdom. Foxconn cares about what? Building plants in the US… soon?

Stephen E Arnold, November 28, 2022

Cyber Security? That Is a Good Question

November 25, 2022

This is not ideal. We learn from Yahoo Finance, “Russian Software Disguised as American Finds Its Way into U.S. Army, CDC Apps.” Reuters journalists James Pearson and Marisa Taylor report:

“Thousands of smartphone applications in Apple and Google’s online stores contain computer code developed by a technology company, Pushwoosh, that presents itself as based in the United States, but is actually Russian, Reuters has found. The Centers for Disease Control and Prevention (CDC), the United States’ main agency for fighting major health threats, said it had been deceived into believing Pushwoosh was based in the U.S. capital. After learning about its Russian roots from Reuters, it removed Pushwoosh software from seven public-facing apps, citing security concerns. The U.S. Army said it had removed an app containing Pushwoosh code in March because of the same concerns. That app was used by soldiers at one of the country’s main combat training bases. According to company documents publicly filed in Russia and reviewed by Reuters, Pushwoosh is headquartered in the Siberian town of Novosibirsk, where it is registered as a software company that also carries out data processing. … Pushwoosh is registered with the Russian government to pay taxes in Russia. On social media and in U.S. regulatory filings, however, it presents itself as a U.S. company, based at various times in California, Maryland and Washington, D.C., Reuters found.”

Pushwoosh’s software was included in the CDC’s main app and other apps that share information on health concerns, including STDs. The Army had used the software in an information portal at, perhaps among other places, its National Training Center in California. Any data breach there could potentially reveal upcoming troop movements. Great. To be clear, there is no evidence data has been compromised. However, we do know Russia has a pesky habit of seizing any data it fancies from companies based within its borders.

Other entities apparently duped by Pushwoosh include the NRA, Britain’s Labour Party, and large companies like Unilever, along with makers of many items on Apple’s and Google’s app stores. The article includes details on how the company made it look like it was based in the US and states the FTC has the authority to prosecute those who engage in such deceptive practices. Whether it plans to bring charges remains to be seen.

Cynthia Murrell, November 25, 2022

Are Governments Behaving Like Sheep?

November 24, 2022

North Korea, China, and possibly Russia are incarnations of Orwell’s Big Brother from the dystopian novel 1984. The US government is compared to Big Brother (and rightly so) when it attempts to block free speech. The thing about outlawing speech is that it takes too much energy to regulate. The US government wants to limit free speech, but only when it feels like it. We do not want that either, because the government lies. Gizmodo explains why we do not want the government to be Big Brother in “You Really Don’t Want The Government To Be Your Content Moderator.”

The Department of Homeland Security is collaborating with tech firms and large businesses to repackage Bush’s “War on Terror” into a new product. They are building tools to monitor social media and combat disinformation. Why did this happen?

“In April, the Biden administration announced the launch of a Disinformation Governance Board, a new unit within DHS meant to “standardize the [government’s] treatment of disinformation” across various agencies. But the project was fumbled from the start: the unit initially failed to release a charter, leaving Americans to wonder just what exactly this shadowy new group with a creepy name was going to be doing. It didn’t take long for critics—on both the political left and right—to start referring to it as a “Ministry of Truth,” (the notorious propaganda bureau from George Orwell’s 1984). Though officials tried to salvage the effort, DHS shuttered the board in May after it had been operational for less than a month.”

Biden’s administration continued the Orwellian acts with another organization: the Cybersecurity and Infrastructure Security Agency (CISA). Big businesses such as JPMorgan Chase and Twitter are working with the FBI and CISA to counter state-sponsored disinformation campaigns. The US government also wants to address disinformation about COVID-19 vaccine efficacy, US support of Ukraine, the Afghanistan withdrawal, and racial justice.

Is the US government an impartial entity, despite what politicians claim?

Whitney Grace, November 24, 2022

From Our Pipe Dream Department: Harmful AI Must Pay Victims!

October 28, 2022

It looks like the European Commission is taking the potential for algorithms to cause harm seriously. The Register reports, “Europe Just Might Make it Easier for People to Sue for Damage Caused by AI Tech.” Vice-President for Values and Transparency Věra Jourová frames the measure as a way to foster trust in AI technologies. Apparently EU officials believe technical innovation is helped when the public knows appropriate guardrails are in place. What an interesting perspective. Writer Katyanna Quach describes:

“The proposed AI Liability Directive aims to do a few things. One main goal is updating product liability laws so that they effectively cover machine-learning systems and lower the burden-of-proof for a compensation claimant. This ought to make it easier for people to claim compensation, provided they can prove damage was done and that it’s likely a trained model was to blame. This means someone could, for instance, claim compensation if they believe they’ve been discriminated against by AI-powered recruitment software. The directive opens the door to claims for compensation following privacy blunders and damage caused by poor safety in the context of an AI system gone wrong. Another main aim is to give people the right to demand from organizations details of their use of artificial intelligence to aid compensation claims. That said, businesses can provide proof that no harm was done by an AI and can argue against giving away sensitive information, such as trade secrets. The directive is also supposed to give companies a clear understanding and guarantee of what the rules around AI liability are.”

Officials hope such clarity will encourage developers to move forward with AI technologies without the fear of being blindsided by unforeseen allegations. Another goal is to build the current patchwork of AI standards and legislation across Europe into a cohesive set of rules. Commissioner for Justice Didier Reynders declares citizen protection a top priority, stating, “technologies like drones or delivery services operated by AI can only work when consumers feel safe and protected.” Really? I’d like to see US officials tell that to Amazon.

Cynthia Murrell, October 28, 2022

Forty Firms and the European Union Demonstrate Their Failure to Be Googley

October 20, 2022

In the last 25 years, an estimated 300,000 people have worked for the Google as FTEs (full time equivalents). My view is that the current crop of European Union officials as well as the senior managers of several dozen online shopping services firms are not Google-grade human resources. Why do I make this distinction between those who are Googley and those who do not make the grade? Being Googley is not like joining the French Foreign Legion. In that organization, a wannabe Legionnaire must do push ups, master the lingo of the new homeland, and be ready to die for France. At Google one must be clever, have the “right stuff” intellectually, and be adept at solving problems, playing with a mobile phone, and manipulating a Mac simultaneously. Being Googley means understanding the ethos of the forever young online search system. That system, as I understand it, accepts the constraints of the GoTo, Overture, and Yahoo online advertising concept. Furthermore, that system accepts that Oingo became a key component in matching advertising to user interests. Thus, to be Googley means embracing smart software, minimal interaction with humanoids (who are by definition “not Googley”), and reliance on charging for entering and leaving a digital saloon. Extra cash is required to enjoy the for-fee options in the establishment; for example, a mouse pad with the Google logo silkscreened thereupon.

I read “EU Companies Claim Google Still Abusing its Shopping Power.” The article explains that a number of companies believe the Google is not behaving in a warm, friendly, collegial manner. Are these firms’ allegations correct?

I don’t know.

What I do know is that the companies signing the letter to EU regulators are demonstrating to me that these organizations are not Googley. What does this mean? May I hypothesize about what the implications of lacking an understanding of the Alphabet Google YouTube DeepMind organization might be? Of course, you say. So here are my thoughts:

  1. None of the signatories nor the employees of these signatories will receive first-class tchotchkes at conferences at which AGYD has a booth or stand equipped with freebies.
  2. None of the signatories, their employees, nor their progeny will be hired by AGYD due to manifest non-Googliness. (Remember, please, that Googliness is next to Godliness. Who else can solve death?)
  3. None of the Web sites, online properties, or content objects will be made findable by the “black box” operated by closely guarded algorithms and informed by the superior methods of the smart software. (I must admit I find the idea “if you are not findable in Google, you do not exist” most existentially well formed.)
  4. None of the elected officials involved in fining, criticizing, or demonstrating non-Googliness will be supported in their re-election efforts by the GOOG’s powerful systems.

AGYD is not a company. It is a digital country. It handles more than 90 percent of the search queries in North America, South America, and Western Europe. YouTube is television for those in Eastern Europe. The Google is bigger than the definitely Google-challenged outfits in Europe. Perhaps thinking about the downsides of not being Googley is a useful activity? Just a thought. But it may be too late for the 40 outfits signing a letter attesting to their failure in the Google Comprehension Examination.

Stephen E Arnold, October 20, 2022

Why Is Google on the Hot Seat in India? Does the Indian Government Understand Being Googley?

October 19, 2022

The Competition Commission of India (CCI) is piling up allegations against Google faster than they can be resolved. The Indian Express reports, “CCI Orders Another Probe Against Google.” At issue is the company’s allegedly self-serving terms for news organizations. Gee, what do India’s regulators sense that US regulators seem to overlook? (And Europe’s, for that matter.) We learn:

“The News Broadcasters & Digital Association had alleged that its members are forced to provide their news content to Google in order to prioritize their weblinks in the Search Engine Result Page (SERP) of Google. As a result, Google free rides on the content of the members without giving them adequate compensation, as per the complaint. Among others, it was alleged that Google exploited the dependency of the members on the search engine offered by Google for referral-traffic to build services such as Google News, Google Discover and Google Accelerated Mobile Pages (AMP). The search engine major provides news content to user through Google Search and through news aggregator vertical, Google News. According to the complaint, in Google Search, users can either search directly for news through News Tab or receive news through result in SERPs. Google incorporated news content in its SERPs through featured snippets including ‘Top Stories’ carousels. However, the revenue distributed by Google to news publishers doesn’t compensate for the real contribution made by the association’s members to these platforms, it added.”

The latest probe is being consolidated with two similar ones already in progress. Those claimants are the Digital News Publishers Association and the Indian Newspaper Society. How many more plaintiffs will join the fray before the combined investigation concludes? More importantly, will any penalties be imposed that can even scratch the tech powerhouse?

Cynthia Murrell, October 19, 2022

The FCC Springs into Action Regarding ISPs

October 10, 2022

The easiest way to describe the COVID-19 pandemic is that it sucked. People died, paychecks were cut, pandemic pets were returned to shelters, and now no one wants to leave their houses. Another ugly side effect is that bad actors filed false claims with US government offices to receive relief funds. Light Reading explains how bad actors took advantage of the Affordable Connectivity Program (ACP): “FCC Inspector General Says ‘Dozens’ Of ISPs Claimed Fraudulent ACP Funds.”

The FCC Office of Inspector General (OIG) noted that dozens of broadband providers (ISPs) fraudulently claimed ACP reimbursements. The bad-acting ISPs enrolled a single individual multiple times for ACP reimbursements amounting to thousands of dollars. The scams were conducted in Texas, Ohio, Alabama, and Oklahoma, with Oklahoma the worst offender. ISPs are responsible for verifying which households are eligible under the ACP rules. The government is cracking down on fraud:

“Following the release of the report, the Wireline Competition Bureau published a public notice outlining new steps it’s implementing to “limit opportunities for waste, fraud, and abuse” with the ACP. As per the notice, the Universal Service Administrative Company (USAC) is improving the measures it uses to verify BQPs [benefit qualifying person] as well as instituting processes to hold payments and de-enroll households that used the same BQP.”

A total of $14.2 billion was allotted to the ACP, but only about a third of eligible households are receiving the assistance:

“According to an ACP dashboard from the Institute for Local Self-Reliance (ILSR) and Community Networks, just 13 million qualifying households in the US are enrolled out of an eligible 37 million, or 36.5% of the population. That includes 40% of eligible Oklahomans, where the FCC report cites its most egregious example of ACP fraud.”

ACP funding is predicted to run out by March 2025; if enrollment were boosted by 50 percent, that money would be gone by April 2024.
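A minimal back-of-envelope sketch of that burn-rate claim, assuming a flat $30-per-month subsidy per household and a hypothetical remaining balance (neither figure appears in the article, which supplies only the $14.2 billion total, the 13 million enrolled households, and the projected run-out dates), looks roughly like this:

```python
# Hypothetical ACP burn-rate arithmetic. The $30 monthly subsidy and the
# ~$11 billion remaining balance are illustrative assumptions, not figures
# from the article.

def months_until_exhausted(remaining_funds: float,
                           enrolled_households: float,
                           monthly_subsidy: float = 30.0) -> float:
    """Months the remaining funds last at a constant monthly burn rate."""
    monthly_burn = enrolled_households * monthly_subsidy
    return remaining_funds / monthly_burn

baseline = months_until_exhausted(11e9, 13e6)        # roughly 28 months
boosted = months_until_exhausted(11e9, 13e6 * 1.5)   # roughly 19 months
print(f"Current enrollment: ~{baseline:.0f} months; 50% boost: ~{boosted:.0f} months")
```

Counted from late 2022, those rough figures land near the March 2025 and April 2024 dates cited above.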

The ACP needs more funds, and it needs to weed out fraudulent claims. At the rate the US government acts, it is going to take a long time to do either. Still, it will probably be faster than the notoriously slow governments of Mexico, France, and India.

Whitney Grace, October 10, 2022

ISPs: The Tension Is Not Resolved

October 7, 2022

The deck is stacked against individual consumers, but sometimes the law favors them, as in a recent case in Maine. The Associated Press shared the good news in the story, “Internet Service Providers Drop Challenge Of Privacy Laws.” Maine has one of the strictest Internet privacy laws in the country; it prevents service providers from using, selling, disclosing, or providing access to consumers’ personal information without their consent.

Industry associations and corporations armed with huge budgets and savvy lawyers sued the state, claiming the law violated their First Amendment rights. A judge rejected the lawsuit, protecting the little guy. The industry associations agreed to pay the $55,000 in legal costs the state accrued defending the law. The ACLU helped out as well:

“Supporters of Maine’s law include the ACLU of Maine, which filed court papers in the case in favor of keeping the law on the books. The ACLU said in court papers that the law was ‘narrowly drawn to directly advance Maine’s substantial interests in protecting consumers’ privacy, freedom of expression, and security.’”

Democratic Gov. Janet Mills has also defended the law as “common sense.”

Maine is also the home of another privacy law that regulates the use of facial recognition technology. That law, which came on the books last year, has also been cited as the strictest of its kind in the U.S.

This is yet another example of corporate America putting profits over consumer rights and protections. There is a drawback, however: locating criminals. Many modern criminal cases are solved with access to a criminal’s Internet data. Bad actors forfeit their rights when they commit crimes, so they should not be protected by these laws. The unfortunate part is that some people disagree.

How about we use this reasoning: the average person is protected by the law, and everyone who participates in sex trafficking, pedophilia, and stealing tons of money is not. The basic black and white text should do the trick.

Whitney Grace, October 7, 2022
