From the Land of Science Fiction: AI Is Alive
October 7, 2024
Those somewhat erratic podcasters at Windows Central published a “real” news story. I am a dinobaby, and I must confess: I am easily amused. The “real” news story in question is “Sam Altman Admits ChatGPT’s Advanced Voice Mode Tricked Him into Thinking AI Was a Real Person: ‘I Kind of Still Say “Please” to ChatGPT, But in Voice Mode, I Couldn’t Use the Normal Niceties. I Was So Convinced, Like, Argh, It Might Be a Real Person.’”
I call Sam Altman “Sam AI-Man.” He has been the A Number One sales professional pitching OpenAI’s smart software. As far as I know, that system is still software, and it demonstrates some predictable weirdnesses. Even though we have done a couple of successful start-ups and worked on numerous advanced technology projects, few at Halliburton forgot that nuclear stuff could go bang. At Booz, Allen no one forgot that a heads-up display would improve mission success rates and save lives as well. At Ziff, no one forgot that our next-generation subscription management system was software, not a diligent 21-year-old from Queens. Therefore, I find it just plain crazy that Sam AI-Man has forgotten that OpenAI’s system is software, coded by people who continue to abandon the good ship OpenAI.
Another AI believer has formed a humanoid attachment to a machine and software. Perhaps the female computer scientist is representative of a rapidly increasing cohort of people who have some personality quirks. Thanks, MSFT Copilot. How are those updates to Windows going? About as expected, right?
Last time I checked, the software I have is not alive. I just pinged ChatGPT’s most recent confection and received the same old error in response to a query I run when I want to benchmark “improvements.” Nope. ChatGPT is not alive. It is software. It is stupid in a way only neural networks can be. Like the hapless Googler who got fired because he went public with his belief that Google’s smart software was alive, Sam AI-Man may want to reconsider his remarks.
Let’s look at how the esteemed Windows Central write up tells the quite PR-shaped, somewhat sad story. The write up says without much humor, satire, or critical thinking:
In a short clip shared on r/OpenAI’s subreddit on Reddit, Altman admits that ChatGPT’s Voice Mode was the first time he was tricked into thinking AI was a real person.
Ah, an output for the Reddit users. PR, right?
The canny folk at Windows Central report:
In a recent blog post by Sam Altman, Superintelligence might only be “a few thousand days away.” The CEO outlined an audacious plan to edge OpenAI closer to this vision of “$7 trillion and many years to build 36 semiconductor plants and additional data centers.”
Okay, a “few thousand.”
Then comes the payoff for the OpenAI outfit, but not for the staff leaving the impressively electricity-consuming OpenAI:
Coincidentally, OpenAI just closed its funding round, where it raised $6.6 billion from investors, including Microsoft and NVIDIA, pushing its market capitalization to $157 billion. Interestingly, the AI firm reportedly pleaded with investors for exclusive funding, leaving competitors like former OpenAI Chief Scientist Ilya Sutskever’s Safe Superintelligence Inc. and Elon Musk’s xAI to fend for themselves. However, investors are still confident that OpenAI is on the right trajectory to prosperity, potentially becoming the world’s dominant AI company worth trillions of dollars.
Nope, not coincidentally. The money is the payoff from a full-court press for funds. Apple seems to have an aversion to sweaty, easily fooled sales professionals. But other outfits want to buy into the Sam AI-Man vision. The dream the money people have is formed from piles of real money; no HMSTR coin for these optimists.
Several observations, whether you want ‘em or not:
- OpenAI is an outfit which has zoomed because of the Microsoft deal and the announcement that OpenAI would be the Clippy for Windows and Azure. Without that “play,” OpenAI probably would have remained a peculiarly structured non-profit thinking about where to find a couple of bucks.
- The revenue-generating aspect of OpenAI is working. People are giving Sam AI-Man money. Other outfits with AI are not quite in OpenAI’s league and most may never be within shouting distance of the OpenAI PR megaphone. (Yep, that’s you folks, Windows Central.)
- Sam AI-Man may believe the software written by former employees is alive. Okay, Sam, that’s your perception. Mine is that OpenAI is zeros and ones with some quirks; namely, making stuff up just like a certain luminary in the AI universe.
Net net: I wonder if this was a story intended for the Onion and rejected because it was too wacky for Onion readers.
Stephen E Arnold, October 7, 2024
FOGINT: Ukraine Government Telegram Restrictions
October 7, 2024
The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.
“Ukrainian Parliament to Restrict Telegram Usage” reports that Telegram faces new restrictions due to security concerns. The news story says:
The Verkhovna Rada (the Ukrainian parliament) will introduce restrictions on the use of the Telegram messenger app for official purposes.
Mr. Durov’s willingness to cooperate with government requests for user information is not the primary reason for this set of restrictions on Ukrainian government staff use of Telegram. The write up points out:
These measures are justified by past incidents where third parties gained access to government employees’ data through Telegram or created fake accounts
What are the measures used by Ukrainian officials to discourage the use of Telegram? Among those in use are:
- No contact synchronization
- No official information transmitted on a Telegram channel
- No Telegram app on work computers, government-provided mobile phones, or personal devices used for government communication
- “Technical blocks” will be implemented to prevent Telegram usage.
Will these measures work? The answer: “To some degree.” However, the increase in interest in alternatives has created a mini-boom for the SimpleX end-to-end encrypted application. Certain ultra-leaning groups are moving to other secure messaging systems.
A person who looks a bit like Pavel Durov demonstrates his patented exercise: A sudden twist and back flip from a cell in a French prison. Thanks, MSFT Copilot, good enough like so many things in 2024.
The problem, however, is that Telegram has more than 900 million users and offers a number of user-centric features not available in other E2EE applications; for example, Telegram does not charge for data storage or bandwidth. The fix is to acquire a burner phone or use specialized services.
The interesting facet of this move is that it comes after Telegram’s decision to block certain Ukrainian-produced content from distribution to Telegram users in Russia. Prior to Telegram’s surprising action, Ukrainian government officials disseminated text content to Russians who were members of Ukrainian Telegram channels.
That action made clear that Telegram was demonstrating its flexibility. Pavel Durov then did a Cirque du Soleil vault with his fancy move to cooperate with legitimate requests for information from unnamed government authorities.
FOGINT thinks Mr. Durov is confident he stuck his landing for this trick and scored a 10. FOGINT scored Mr. Durov an imaginary number.
Stephen E Arnold, October 7, 2024
Why Present Bad Sites?
October 7, 2024
I read “Google Search Is Testing Blue Checkmark Feature That Helps Users Spot Genuine Websites.” I know this is a test, but I have a question: What’s “genuine” mean to Google and its smart software? I know that Google cannot answer this question without resorting to consulting nonsensicalness, but “genuine” is a word. I just don’t know what’s genuine to Google. Is a Web site that uses SEO trickery to appear in a results list genuine? Is it a blog post written by a duplicitous PR person working at a large Google-type firm? Is it a PDF appearing on a “genuine” government’s Web site?
A programmer thinking about blue check marks. The obvious conclusion is to provide a free blue check mark. Then later one can charge for that sign of goodness. Thanks, Microsoft. Good enough. Just like that big Windows update. Good enough.
The write up reports:
Blue checkmarks have appeared next to certain websites on Google Search for some users. According to a report from The Verge, this is because Google is experimenting with a verification feature to let users know that sites aren’t fraudulent or scams.
Okay, what’s “fraudulent” and what’s a “scam”?
What does Google say? According to the write up:
A Google spokesperson confirmed the experiment, telling Mashable, “We regularly experiment with features that help shoppers identify trustworthy businesses online, and we are currently running a small experiment showing checkmarks next to certain businesses on Google.”
A couple of observations:
- Why not allow the user to NOT out (that is, exclude) these sites? Better yet, give the user a choice of seeing de-junked or fully junked sites? Wow, that’s too hard. Imagine. A Boolean operator.
- Why does Google bother to index these sites? Why not change the block list for the crawl? Wow, that’s too much work. Imagine a Googler editing a “do not crawl” list manually. (A minimal sketch of the idea appears after this list.)
- Is Google admitting that it can identify problematic sites like those which push fake medications or the stolen software videos on YouTube? That’s pretty useful information for an attorney taking legal action against Google, isn’t it?
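For what it is worth, here is a minimal, purely hypothetical sketch of the idea. The domain names, the blocklist, and the filter below are invented for illustration; they are not Google’s actual machinery, just a reminder that a manually curated “do not crawl” or de-junk list is not exotic engineering.

```python
# Hypothetical sketch only: invented domains, not Google's actual systems.
from urllib.parse import urlparse

# A manually curated "do not crawl" / de-junk list (made-up names).
BLOCKLIST = {
    "fake-pills-superstore.example",
    "stolen-software-videos.example",
}

def allowed(url: str) -> bool:
    """True if the URL's host is not on the blocklist (or a subdomain of a blocked host)."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in BLOCKLIST)

results = [
    "https://fake-pills-superstore.example/cheap-meds",
    "https://a-genuine-pharmacy.example/info",
]

# The Boolean NOT this dinobaby asked for: show only the de-junked results.
print([r for r in results if allowed(r)])
```

Not exactly quantum computing, is it?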
Net net: Google is unregulated and spouts baloney. Google needs to jack up its revenue. It has fines to pay and AI wizards to pay. Tough work.
Stephen E Arnold, October 7, 2024
A Modern Employee Wants Love, Support, and Compassion
October 5, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Beyond Search is a “WordPress” blog. I have followed with (to be honest) not much interest the dispute between a founder and a couple of organizations. WordPress has some widgets that one of the Beyond Search team “subscribes” to each year. These, based on my experience, are so-so. We have moved the blog to WordPress-friendly hosting services because the service was [a] not stable, [b] not speedy, and [c] not connected to any known communication service except Visa.
I read “I Stayed,” a blog post. The write up expresses a number of sentiments about WordPress, its employees, and its mission. (Who knew? A content management system with a “mission.”) I noted this statement:
Listen, I’m struggling with medical debts and financial obligations incurred by the closing of my conference and publishing businesses.
I don’t know much about modern work practices, but this sentence suggests to me that a full-time employee was running two side gigs. Both of these failed, and the author of the post is in debt. I am a dinobaby, and I assumed that when a company like Halliburton or Booz, Allen & Hamilton hired me as a full-time employee, my superiors expected me to focus on the tasks given to me by Halliburton and Booz, Allen & Hamilton. “Go to a uranium mine. Learn. Ask questions. Take photographs of ore processing,” so I went. No side gigs, no questions about breathing mine dust. Just do the work. Not now. The response to a superior’s request apparently is, “Hey, you have spare time to pay attention to that conference and publishing business. No problemo.” Times have changed.
The write up includes this statement about not quitting or taking a buy out:
I stayed because I believe in the work we do. I believe in the open web and owning your own content. I’ve devoted nearly three decades of work to this cause, and when I chose to move in-house, I knew there was only one house that would suit me. In nearly six years at Automattic, I’ve been able to do work that mattered to me and helped others, and I know that the best is yet to come.
I think I am supposed to interpret this decision as noble or allegedly noble. My view of what WordPress professionals who remain on the job should do includes these elements:
- If you have a full-time job at a commercial or quasi-commercial enterprise, focus on the job. It would be great if WordPress fixed the wonky cursor movement in its editor. You know it really doesn’t work. In fact, it sucks on my machines, both Mac and Windows.
- Think about the interface. Hiding frequently used functions is not helpful.
- Use words to make clear certain completely weird icons. Yep, actual words.
- Display explicit labels which are not confusing. I don’t find multiple uses of the word “Publish” particularly helpful.
To sum up: Suck it up, buttercup.
Stephen E Arnold, October 5, 2024
The Future of Copyright: AI + Bots = Surprise. Disappeared Mario Content.
October 4, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Did famously litigious Nintendo hire “brand protection” firm Tracer to find and eliminate AI-made Mario mimics? According to The Verge, “An AI-Powered Copyright Tool Is Taking Down AI-Generated Mario Pictures.” We learn the tool went on a rampage through X, filing takedown notices for dozens of images featuring the beloved Nintendo character. Many of the images were generated by xAI’s Grok AI tool, which is remarkably cavalier about infringing (or offensive) content. But some seem to have been old-school fan art. (Whether noncommercial fan art is fair use or copyright violation continues to be debated.) Verge writer and editor Wes Davis reports:
“The company apparently used AI to identify the images and serve takedown notices on behalf of Nintendo, hitting AI-generated images as well as some fan art. The Verge’s Tom Warren received an X notice that some content from his account was removed following a Digital Millennium Copyright Act (DMCA) complaint issued by a ‘customer success manager’ at Tracer. Tracer offers AI-powered services to companies, purporting to identify trademark and copyright violations online. The image in question, shown above, was a Grok-generated picture of Mario smoking a cigarette and drinking an oddly steaming beer.”
Navigate to the post to see the referenced image, where the beer does indeed smoke but the ash-laden cigarette does not. Davis notes the rest of the posts are, of course, no longer available to analyze. However, some users have complained their original fan art was caught in the sweep. We learn:
“One of the accounts that was listed in the DMCA request, OtakuRockU, posted that they were warned their account could be terminated over ‘a drawing of Mario,’ while another, PoyoSilly, posted an edited version of a drawing they said was identified in a notice. (The new one had a picture of a vaguely Mario-resembling doll inserted over a part of the image, obscuring the original part containing Mario.)”
Since neither Nintendo nor Tracer responded to Davis’ request for comment, he could not confirm Tracer was acting at the game company’s request. He is not, however, ready to let the matter go: The post closes with a request for readers to contact him if they had a Mario image taken down, whether AI-generated or not. See the post for that contact information, if applicable.
Cynthia Murrell, October 4, 2024
Skills You Can Skip: Someone Is Pushing What Seems to Be Craziness
October 4, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The Harvard ethics research scam has ended. The Stanford University president resigned over fake data late in 2023. A clump of students in an ethics class used smart software to write their first paper. Why not use smart software? Why not let AI or just dishonest professors make up data with the help of assorted tools like Excel and Photoshop? Yeah, why not?
A successful pundit and lecturer explains to his acolyte that learning to write is a waste of time. And what does the pundit lecture about? I think he was pitching his new book, which does not require that one learn to write. Logical? Absolutely. Thanks, MSFT Copilot. Good enough.
My answer to the question is: “Learning is fundamental.” No, I did not make that up, nor did I believe the information in “How AI Can Save You Time: Here Are 5 Skills You No Longer Need to Learn.” The write up has sources; it has quotes; and it has the type of information which is hard to believe was assembled by humans who presumably have some education, maybe a college degree.
What are the five skills you no longer need to learn? Hang on:
- Writing
- Art design
- Data entry
- Data analysis
- Video editing.
The expert who generously shared his remarkable insights for the Euro News article is Bernard Marr, a futurist and internationally best-selling author. What did Mr. Marr author? He has written “Artificial Intelligence in Practice: How 50 Successful Companies Used Artificial Intelligence To Solve Problems,” “Key Performance Indicators For Dummies,” and “The Intelligence Revolution: Transforming Your Business With AI.”
One question: If writing is a skill one does not need to learn, why does Mr. Marr write books?
I wonder if Mr. Marr relies on AI to help him write his books. He seems prolific: Amazon reports that he has outputted a dozen titles, maybe more. But volume does not explain the tension between Mr. Marr’s “writing” (which may be outputting) and the suggestion that one does not need to learn or develop the skill of writing.
The cited article quotes the prolific Mr. Marr as saying:
“People often get scared when you think about all the capabilities that AI now have. So what does it mean for my job as someone that writes, for example, will this mean that in the future tools like ChatGPT will write all our articles? And the answer is no. But what it will do is it will augment our jobs.”
Yep, Mr. Marr’s job is outputting. You don’t need to learn writing. Smart software will augment one’s job.
My conclusion is that the five identified areas are plucked from a listicle, either generated by a human or an AI system. Euro News was impressed with Mr. Marr’s laser-bright insight about smart software. Will I purchase and learn from Mr. Marr’s “Generative AI in Practice: 100+ Amazing Ways Generative Artificial Intelligence is Changing Business and Society”?
Nope.
Stephen E Arnold, October 4, 2024
META and Another PR Content Marketing Play
October 4, 2024
This write up is the work of a dinobaby. No smart software required.
I worked through a 3,400-word interview in the orange newspaper. “Alice Newton-Rex: WhatsApp Makes People Feel Confident to Be Themselves: The Messaging Platform’s Director of Product Discusses Privacy Issues, AI and New Features for the App’s 2bn Users” contains a number of interesting statements. The write up is behind the Financial Times’s paywall, but it is worth subscribing if you are monitoring what Meta (the Zuck) is planning to do with regard to E2EE or end-to-end encrypted messaging. I want to pull out four statements from the WhatsApp professional. My approach will be to present the Meta statements and then pose one question which I thought the interviewer should have asked. After the quotes, I will offer a few observations, primarily focusing on Meta’s apparent “me too” approach to innovation. Telegram’s feature cadence appears to be two to four years ahead of Meta’s own efforts.
A WhatsApp user is throwing big, soft, fluffy snowballs at the company. Everyone is impressed. Thanks, MSFT Copilot. Good enough.
Okay, let’s look at the quotes which I will color blue. My questions will be in black.
Meta Statement 1: The value of end-to-end encryption.
We think that end-to-end encryption is one of the best technologies for keeping people safe online. It makes people feel confident to be themselves, just like they would in a real-life conversation.
What data does Meta have to back up this “we think” assertion?
Meta Statement 2: Privacy
Privacy has always been at the core of WhatsApp. We have tons of other features that ensure people’s privacy, like disappearing messages, which we launched a few years ago. There’s also chat lock, which enables you to hide any particular conversation behind a PIN so it doesn’t appear in your main chat list.
Always? (That means that privacy is the foundation of WhatsApp in a categorically affirmative way.) What do you mean by “always”?
Meta Statement 3:
… we work to prevent abuse on WhatsApp. There are three main ways that we do this. The first is to design the product up front to prevent abuse, by limiting your ability to discover new people on WhatsApp and limiting the possibility of going viral. Second, we use the signals we have to detect abuse and ban bad accounts — scammers, spammers or fake ones. And last, we work with third parties, like law enforcement or fact-checkers, on misinformation to make sure that the app is healthy.
What data can you present to back up these statements about what Meta does to prevent abuse?
Meta Statement 4:
if we are forced under the Online Safety Act to break encryption, we wouldn’t be willing to do it — and that continues to be our position.
Is this position tenable in light of France’s action against Pavel Durov, the founder of Telegram, and the financial and legal penalties nation states can and are imposing on Meta?
Observations:
- Just like Mr. Zuck’s cosmetic and physical makeover, these statements describe a WhatsApp which is out of step with the firm’s historical behavior.
- The changes in WhatsApp appear to be emulation of some Telegram innovations but with a two to three year time lag. I wonder if Meta views Telegram as a live test of certain features and functions.
- The responsiveness of Meta to lawful requests has, based on what I have heard from my limited number of contacts, been underwhelming. Cooperation is an area in which Meta needs additional investment and some incentives for the Meta employees who interact with government personnel.
Net net: A fairly high profile PR and content marketing play. FT is into kid glove leather interviews and throwing big soft Nerf balls, it seems.
Stephen E Arnold, October 4, 2024
SolarWinds Outputs Information: Does Anyone Other Than Microsoft and the US Government Remember?
October 3, 2024
I love these dribs and drops of information about security issues. From the maelstrom of emails, meeting notes, and SMS messages, only glimpses emerge of what’s going on when a security misstep takes place. That’s why the write up “SolarWinds Security Chief Calls for Tighter Cyber Laws” is interesting to me. How many lawyer-type discussions were held before the SolarWinds professional spoke with a “real” news person from the somewhat odd orange newspaper? (The Financial Times used to give these things away in front of their building some years back. Yep, the orange newspaper caught some people’s eye in meetings which I attended.)
The subject of the interview was a person who is/was the chief information security officer at SolarWinds. He was on duty when the tiny misstep took place. I will leave it to you to determine whether the CrowdStrike misstep or the SolarWinds misstep was of more consequence. Neither affected me because I am a dinobaby in rural Kentucky running steam-powered computers from my next-generation office in a hollow.
A dinobaby is working on a blog post in rural Kentucky. This talented and attractive individual was not affected by either the SolarWinds or the CrowdStrike security misstep. A few others were not quite so fortunate. But, hey, who remembers or cares? Thanks, Microsoft Copilot. I look exactly like this. Or close enough.
Here are three statements from the article in the orange newspaper I noted:
First, I learned that:
… cyber regulations are still ‘in flux’ which ‘absolutely adds stress across the globe’ on cyber chiefs.
I am delighted to learn that those working in cyber security experience stress. I wonder, however, what about the individuals and organizations who must think about the consequences of having their systems breached. These folks pay to be secure, I believe. When that security fails, will the affected individuals worry about the “stress” on those who were supposed to prevent a minor security misstep? I know I sure worry about these experts.
Second, how about this observation by the SolarWinds’ cyber security professional?
“When you don’t have rules to follow, it’s very hard to follow them,” said Brown [the cyber security leader at SolarWinds]. “Very few security people would ever do something that wasn’t right, but you just have to tell us what’s right in order to do it,” he added.
Let’s think about this statement. To be a senior cyber security professional one has to be trained, have some cyber security certifications, and maybe some specialized in-service instruction at conferences or specific training events. Therefore, those who attend these events allegedly “learn” what rules to follow; for instance, make systems secure, conduct routine stress tests, have third party firms conduct security audits, validate the code, widgets, and APIs one uses, etc., etc. Is it realistic to assume that an elected official knows anything about security systems at a cyber security firm? As a dinobaby, my view is that these cyber wizards need to do their jobs and not wait for non-experts to give them “rules.” Make the systems secure via real work, not chatting at conferences or drinking coffee in a conference room.
And, finally, here’s another item I circled in the orange newspaper:
Brown this month joined the advisory board of Israeli crisis management firm Cytactic but said he was still committed to staying in his role at SolarWinds. “As far as the incident at SolarWinds: It happened on my watch. Was I ultimately responsible? Well, no, but it happened on my watch and I want to get it right,” he said.
Wasn’t Israel the country caught flat-footed in October 2023? How does a company in Israel — presumably with staff familiar with the tools and technologies used to alert Israel of hostile actions — learn from another security professional caught flat-footed? I know this is an easily dismissed question, but for a dinobaby, doesn’t one want to learn from a person who gets things right? As I said, I am old fashioned, old, and working in a log cabin on a steam-powered computing device.
The reality is that egregious security breaches have taken place. The companies and their staff are responsible. Are there consequences? I am not so sure. That means the present “tell us the rules” attitude will persist. Factoid: Government regulations in the US are years behind what clever companies and their executives do. No gap closing, sorry.
Stephen E Arnold, October 3, 2024
Russian Crypto Operation: An Endgame
October 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The US Department of the Treasury took action against PM2BTC, “a Russian virtual currency exchanger associated with Russian individual Sergey Sergeevich Ivanov (Ivanov),” identifying it as being of “primary money laundering concern” in connection with Russian illicit finance. The Treasury’s news release about the multi-national action is located at this link. FOGINT has compiled a list of details about this action.
The write up says:
Today, the U.S. Department of the Treasury is undertaking actions as part of a coordinated international effort to disrupt Russian cybercrime services. Treasury’s Financial Crimes Enforcement Network (FinCEN) is issuing an order that identifies PM2BTC—a Russian virtual currency exchanger associated with Russian individual Sergey Sergeevich Ivanov (Ivanov)—as being of “primary money laundering concern” in connection with Russian illicit finance. Concurrently, the Office of Foreign Assets Control (OFAC) is sanctioning Ivanov and Cryptex—a virtual currency exchange registered in St. Vincent and the Grenadines and operating in Russia. The FinCEN and OFAC actions are being issued in conjunction with actions by other U.S. government agencies and international law enforcement partners to hold accountable Ivanov and the associated virtual currency services.
Here’s a selection of the items which may be of interest to cyber crime analysts and those who follow crypto activity.
- Two individuals were added to the sanctions list: Sergey Ivanov and Timur Shakhmametov. A reward or bounty has been offered for information leading to the arrest of these individuals. The payment could exceed US$9 million
- The PM2BTC and Cryptex entities have worked with or been associated with other crypto entities, possibly Garantex, UAPS, Hydra, FerumShop, Bitzlato, and an underground payment processing service
- Among the entities working on this operation (Endgame) were Europol, Germany, Great Britain, Latvia, Netherlands, and the US
- In 2014, the two persons of interest wanted to set up an automated (smart) service and may have been working with PerfectMoney and Paymer
- The activities of Messrs. Ivanov and Shakhmametov involved “carding” and other bank-related fraud
Russian regulations provide wiggle room for certain types of financial activity not permitted in the US and the other countries associated with this takedown.
Several observations:
- The operation was large, possibly involving billions of dollars in illegal transactions
- The network of partners and affiliated firms illustrates the appeal of illegal crypto services
- One method of communication used by PM2BTC was Telegram Messenger.
- “The $9 Million US reward / bounty for those two Russian crypto exchange operators wanted by US DOJ is a game changer due to the enormous reward,” Sean Brizendine, a blockchain researcher, told the FOGINT team.
Additional information may become available as the case moves forward in the US and Europe. FOGINT will monitor public information which appears in Russia and other countries.
Stephen E Arnold, October 3, 2024
Big Companies: Bad Guys
October 3, 2024
Just in time for the UN Summit, the International Trade Union Confederation is calling out large corporations. The Guardian reports, “Amazon, Tesla, and Meta Among World’s Top Companies Undermining Democracy—Report.” Writer Michael Sainato tells us:
“Some of the world’s largest companies have been accused of undermining democracy across the world by financially backing far-right political movements, funding and exacerbating the climate crisis, and violating trade union rights and human rights in a report published on Monday by the International Trade Union Confederation (ITUC). Amazon, Tesla, Meta, ExxonMobil, Blackstone, Vanguard and Glencore are the corporations included in the report. The companies’ lobbying arms are attempting to shape global policy at the United Nations Summit of the Future in New York City on 22 and 23 September.”
The write-up shares a few of the report’s key criticisms. It denounces Amazon, for example, for practices from union busting and low wages to sky-high carbon emissions and tax evasion. Tesla, the ITUC charges, commits human rights violations while its majority shareholder Elon Musk loudly rails against democracy itself. And, the report continues, not only has Meta severely amplified far-right propaganda and groups around the world, it actively lobbies against data privacy laws. See the write-up for more examples.
The article concludes by telling us a little about the International Trade Union Confederation:
“The ITUC includes labor group affiliates from 169 nations and territories around the world representing 191 million workers, including the AFL-CIO, the largest federation of labor unions in the US, and the Trades Union Congress in the UK. With 4 billion people around the world set to participate in elections in 2024, the federation is pushing for an international binding treaty being worked on by the Open-ended intergovernmental working group to hold transnational corporations accountable under international human rights laws.”
Holding transnational corporations accountable—is that even possible? We shall see.
Cynthia Murrell, October 3, 2024

