AI and the Workplace: Change Will Happen, Just Not the Way Some Think
May 15, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I read “AI and the Workplace.” The essay offers observations about smart software in the workplace. The idea is that savvy employees will experiment and try to use the technology within today’s work framework. I think that will happen just as the essay suggests. However, there is a larger, more significant impact that is easy to miss when looking only at today’s workplace. Employees either [a] want to keep their job, [b] gain new skills and get a better job, or [c] quit to vegetate or become an entrepreneur. I understand.
The data in the report make clear that some employees are what I call change flexible; that is, these motivated individuals differentiate themselves from others at work by learning and experimenting. Note that more than half the people in the “we don’t use AI” categories want to use AI.
These data come from the cited article and an outfit called Asana.
The report contains other data as well. Some employees get a productivity boost; others just chug along, occasionally getting some benefit from AI. The future, therefore, requires learning, double checking outputs, and accepting that it is early days for smart software. This makes sense; however, it misses where the big change will come.
In my view, the major shift will appear in companies founded now that AI is more widely available. These organizations will be crafted to make optimal use of smart software from the day the new idea takes shape. A new news organization might look like Grok News (the Elon Musk project) or the much reviled AdVon. But even these outfits are anchored in the past. Grok News just substitutes smart software (which hopefully will not kill its users) for old work processes and outputs. AdVon was a “rip and replace” tool for Sports Illustrated. That did not go particularly well in my opinion.
The big job impact will be on new organizational setups with AI baked in. The types of people working at these organizations will not come from the lower 98 percent of the workforce pool. I think the majority of employees who once expected to work in information processing or knowledge work will be like a 58-year-old brand manager at a vape company. Job offers will not be easy to get, and new companies might opt for smart software and search engine optimization marketing. How many workers will that require? Maybe zero. Someone on Fiverr.com will do the job for a couple of hundred dollars a month.
In my view, new companies won’t need workers who are not in the top tier of some high value expertise. Who needs a consulting team when one bright person with knowledge of orchestrating smart software is able to do the work of a marketing department, a product design unit, and a strategic planning unit? In fact, there may not be any “employees” in the sense of workers at a warehouse or a consulting firm like Deloitte.
Several observations are warranted:
- Predicting downstream impacts of a technology unfamiliar to a great many people is tricky and sometimes impossible. Who knew social media would spawn a renaissance in getting tattooed?
- Visualizing how an AI-centric start-up is assembled is a challenge. I submit it won’t look like an insurance company today. What does a Tesla repair station look like? The answer: “Not much.”
- Figuring out how to be one of the elite who gets a job means being perceived as “smart.” Unlike Alina Habba, I know that I cannot fake “smart.” How many people will work hard to maximize the return on their intelligence? The answer, in my experience, is, “Not too many, dinobaby.”
Looking at the future from within the framework of today’s datasphere distorts how one perceives impact. I don’t know what the future looks like, but it will have configurations quite different from those of today’s companies. The future will arrive slowly and then become the foundation of a further evolution. What does the grandson of tomorrow’s AI firm look like? Beauty will be in the eye of the beholder.
Net net: Where will the never-to-be-employed find something meaningful to do?
Stephen E Arnold, May 15, 2024
Encryption Battles Continue
May 15, 2024
Privacy protections are great—unless you are law enforcement attempting to trace a bad actor. India has tried to make it easier to enforce its laws by forcing messaging apps to track each message back to its source. That is challenging for a platform with encryption baked in, as Rest of World reports in “WhatsApp Gives India an Ultimatum on Encryption.” Writer Russell Brandom tells us:
“IT rules passed by India in 2021 require services like WhatsApp to maintain ‘traceability’ for all messages, allowing authorities to follow forwarded messages to the ‘first originator’ of the text. In a Delhi High Court proceeding last Thursday, WhatsApp said it would be forced to leave the country if the court required traceability, as doing so would mean breaking end-to-end encryption. It’s a common stance for encrypted chat services generally, and WhatsApp has made this threat before — most notably in a protracted legal fight in Brazil that resulted in intermittent bans. But as the Indian government expands its powers over online speech, the threat of a full-scale ban is closer than it’s been in years.”
And that could be a problem for a lot of people. We also learn:
“WhatsApp is used by more than half a billion people in India — not just as a chat app, but as a doctor’s office, a campaigning tool, and the backbone of countless small businesses and service jobs. There’s no clear competitor to fill its shoes, so if the app is shut down in India, much of the digital infrastructure of the nation would simply disappear. Being forced out of the country would be bad for WhatsApp, but it would be disastrous for everyday Indians.”
Yes, that sounds bad. For the Electronic Frontier Foundation, it gets worse: The civil liberties organization insists the regulation would violate privacy and free expression for all users, not just suspected criminals.
To be fair, WhatsApp has done a few things to limit harmful content. It has placed limits on message forwarding and has boosted its spam and disinformation reporting systems. Still, there is only so much it can do when enforcement relies on user reports. To do more would require violating the platform’s hallmark: its end-to-end encryption. Even if WhatsApp wins this round, Brandom notes, the issue is likely to come up again when and if the Bharatiya Janata Party does well in the current elections.
Cynthia Murrell, May 15, 2024
Blue-Chip Consulting Firm Needs Lawyers and Luck
May 15, 2024
McKinsey’s blue chip consultants continue their fancy dancing to explain away an itsy bitsy problem: ruined lives and run-of-the-mill deaths from drug overdoses. The International Business Times reminds us, “McKinsey Under Criminal Investigation Over Alleged Role in Fueling Opioid Epidemic.” The investigation, begun before the pandemic, continues to advance at the glacial pace of justice. Journalist Kiran Tom Sajan writes:
“Global consulting firm McKinsey & Company is under a criminal investigation by the U.S. attorneys’ offices in Massachusetts and the Western District of Virginia over its alleged involvement in fueling the opioid epidemic. The Federal prosecutors, along with the Justice Department’s civil division in Washington, are specifically examining whether the consulting firm participated in a criminal conspiracy by providing advice to Purdue Pharma and other pharmaceutical companies on marketing tactics aimed at increasing sales of prescription painkillers. Purdue is the manufacturer of OxyContin, one of the painkillers that allegedly contributed to widespread addiction and fatal overdoses. Since 2021, McKinsey has reached settlements of approximately $1 billion to resolve investigations and legal actions into its collaboration with opioid manufacturers, primarily Purdue. The company allegedly advised Purdue to intensify its marketing of the drug amid the opioid epidemic, which has resulted in the deaths of hundreds of thousands of Americans. McKinsey has not admitted any wrongdoing.”
Of course not. We learn McKinsey raked in about $86 million working for Purdue, most of it since the drug firm’s 2007 guilty plea. Sajan notes the investigations do not stop with the question of fueling the epidemic: The Justice Department is also considering whether McKinsey obstructed justice when it fired two incautious partners—they were caught communicating about the destruction of related documents. It is also examining whether the firm engaged in healthcare fraud when it helped Purdue and other opioid sellers make fraudulent Medicare claims. Will McKinsey’s recent settlement with insurance companies lend fuel to that dumpster fire? Will Lady Luck kick her opioid addiction and embrace those McKinsey professionals? Maybe.
Cynthia Murrell, May 15, 2024
Google Lessons in Management: Motivating Some, Demotivating Others
May 14, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I spotted an interesting comment in “Google Workers Complain of ‘Decline in Morale’ as CEO Sundar Pichai Grilled over Raises, Layoffs: Increased Distrust.” Here’s the passage:
Last month, the company fired 200 more workers, aside from the roughly 50 staffers involved in the protests, and shifted jobs abroad to Mexico and India.
I paired this Xhitter item with “Google Employees Question Execs over Decline in Morale after Blowout Earnings.” That write up asserted:
At an all-hands meeting last week, Google employees questioned leadership about cost cuts, layoffs and “morale” issues following the company’s better-than-expected first-quarter earnings report. CEO Sundar Pichai and CFO Ruth Porat said the company will likely have fewer layoffs in the second half of 2024.
Poor, poor Googzilla. I think the fearsome alleged monopolist could lose a few pounds. What do you think? Thanks, MSFT Copilot. Good enough work, just like some security models we know and love.
Not no layoffs. Just “fewer layoffs.” Okay, that’s a motivator.
The estimable “real” news service stated:
Alphabet’s top leadership has been on the defensive for the past few years, as vocal staffers have railed about post-pandemic return-to-office mandates, the company’s cloud contracts with the military, fewer perks and an extended stretch of layoffs — totaling more than 12,000 last year — along with other cost cuts that began when the economy turned in 2022. Employees have also complained about a lack of trust and demands that they work on tighter deadlines with fewer resources and diminished opportunities for internal advancement.
What’s wrong with this management method? The answer: Absolutely nothing. The write up included this bit of information:
She [Ruth Porat, Google CFO, who is quitting the volleyball and foosball facility] also took the rare step of admitting to leadership’s mistakes in its prior handling of investments. “The problem is a couple of years ago — two years ago, to be precise — we actually got that upside down and expenses started growing faster than revenues,” said Porat, who announced nearly a year ago [in 2023] that she would be stepping down from the CFO position but hasn’t yet vacated the office. “The problem with that is it’s not sustainable.”
Ever tactful, Sundar Pichai (the straight man in the Sundar & Prabhakar Comedy Team) is quoted as saying in silky tones:
“I think you almost set the record for the longest TGIF answer,” he said. Google all-hands meetings were originally called TGIFs because they took place on Fridays, but now they can occur on other days of the week. Pichai then joked that leadership should hold a “Finance 101” Ted Talk for employees. With respect to the decline in morale brought up by employees, Pichai said “leadership has a lot of responsibility here,” adding that “it’s an iterative process.”
That’s a demonstration of tactful high school science club management-speak, in my opinion. To emphasize the future opportunities for the world’s smartest people, he allegedly said, according to the write up:
Pichai said the company is “working through a long period of transition as a company” which includes cutting expenses and “driving efficiencies.” Regarding the latter point, he said, “We want to do this forever.” [Editor note: Emphasis added]
Forever is a long, long time, isn’t it?
Poor, addled Googzilla. Litigation to the left, litigation to the right. Grousing world’s smartest employees. A legacy of baby making in the legal department. Apple apparently falling in lust with OpenAI. Amazon and its pesky Yellow Pages approach to advertising.
The sky is not falling, but there are some dark clouds overhead. And, speaking of overhead, is Google ever going to be able to control its costs, pay off its technical debt, write checks to the governments when the firm is unjustly found guilty of assorted transgressions?
For now, yes. Forever? Sure, why not?
Stephen E Arnold, May 14, 2024
AdVon: Why So Much Traction and Angst?
May 14, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
AdVon. AdVon. AdVon. Okay, the company is in the news. Consider this write up: “Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry.” So why meet AdVon? The subtitle explains:
Remember that AI company behind Sports Illustrated’s fake writers? We did some digging — and it’s got tendrils into other surprisingly prominent publications.
Let’s consider the question: Why is AdVon getting traction among “prominent publications” or any other outfit wanting content? The answer is not far to seek: cutting costs, doing more with less, getting more clicks, getting more money. This is not a multiple choice test in a junior college business class. This is common sense. Smart software makes it possible for those with some skill in the alleged art of prompt crafting and automation to sell “stories” to publishers for less than those publishers can produce the stories themselves.
The future continues to arrive. Here, smart software is saying “Hasta la vista” to the human information generator. The humanoid looks very sad. Neither the AI software nor its owner cares. Revenue and profit are more important as long as the top dogs get paid big bucks. Thanks, MSFT Copilot. Working on your security systems or polishing the AI today?
Let’s look at the cited article’s peregrination to the obvious: AI can reduce costs of “publishing”. Plus, as AI gets more refined, the publications themselves can be replaced with scripts.
The write up says:
Basically, AdVon engages in what Google calls “site reputation abuse”: it strikes deals with publishers in which it provides huge numbers of extremely low-quality product reviews — often for surprisingly prominent publications — intended to pull in traffic from people Googling things like “best ab roller.” The idea seems to be that these visitors will be fooled into thinking the recommendations were made by the publication’s actual journalists and click one of the articles’ affiliate links, kicking back a little money if they make a purchase. It’s a practice that blurs the line between journalism and advertising to the breaking point, makes the web worse for everybody, and renders basic questions like “is this writer a real person?” fuzzier and fuzzier.
Okay. So what? The write up reports:
In spite of the article being labeled as “AI” in AdVon’s CMS, the Outside Inc spokesperson said the company had no knowledge of the use of AI by AdVon — seemingly contradicting AdVon’s claim that automation was only used with publishers’ knowledge.
Okay, corner cutting is part of AdVon’s business model. What about the “minimum viable product” or “good enough” approach to everything from self-driving auto baloney to Boeing aircraft doors? Is AI use somehow exempt from what is now standard business practice? Major academic figures take shortcuts. Now an outfit with some AI skills is supposed to operate like a hybrid of Joan of Arc and Mother Teresa? Sure.
The write up states:
In fact, it seems that many products only appear in AdVon’s reviews in the first place because their sellers paid AdVon for the publicity. That’s because the founding duo behind AdVon, CEO Ben Faw and president Eric Spurling, also quietly operate another company called SellerRocket, which charges the sellers of Amazon products for coverage in the same publications where AdVon publishes product reviews.
To me, AdVon is using a variant of the Google type of online advertising concept. The bar room door swings both ways. The customer pays to enter and the customer pays to leave. Am I surprised? Nope. Should anyone be? How about a government consumer protection watchdog? Tip: Don’t hold your breath. New York City tested a chatbot that provided information that violated city laws.
The write up concludes:
At its worst, AI lets unscrupulous profiteers pollute the internet with low-quality work produced at unprecedented scale. It’s a phenomenon which — if platforms like Google and Facebook can’t figure out how to separate the wheat from the chaff — threatens to flood the whole web in an unstoppable deluge of spam. In other words, it’s not surprising to see a company like AdVon turn to AI as a mechanism to churn out lousy content while cutting loose actual writers. But watching trusted publications help distribute that chum is a unique tragedy of the AI era.
The kicker is that the company owning the publication “exposing” AdVon used AdVon.
Let me offer several observations:
- The research reveals what will become an increasingly widespread business practice. But the practice of using AI to generate baloney and spam variants is not the future. It is now.
- The demand for what appears to be old fashioned information generation is high. The cost of producing this type of information is going to force those who want to generate information to take shortcuts. (How do I know? How about the president of Stanford University who took shortcuts. That’s how. When a university president muddles forward for years and gets caught by accident, what are students learning? My answer: Cheat better than that.)
- AI diffusion is like gerbils. First, you have a couple of cute gerbils in your room. As a nine-year-old, you think those gerbils are cute. Then you have more gerbils. What do you do? You get rid of the gerbils in your house. What about the gerbils? Yeah, they are still out there. One can see gerbils; it is more difficult to see the AI gerbils. The fix is not the plastic bag filled with gerbils in the garbage can. The AI gerbils are relentless.
Net net: Adapt and accept that AI is here, reproducing rapidly, and evolving. The future means “adapt.” One suggestion: Hire McKinsey & Co. to help your firm make tough decisions. That sometimes works.
Stephen E Arnold, May 14, 2024
AI and Doctors: Close Enough for Horseshoes and More Time for Golf
May 14, 2024
Burnout is a growing problem in all industries, but it is especially acute in the medical professions. Doctors and other medical professionals are at incredibly high risk of burnout. The daily stressors of treating patients, paperwork, dealing with insurance agencies, resource limitations, etc. are worsening. Stat News reports that AI algorithms offer a helpful solution for medical professionals, but there are still bugs in the system: “Generative AI Is Supposed To Save Doctors From Burnout. New Data Show It Needs More Training.”
Clinical notes are important for patient care and ongoing treatment. The downside of clinical notes is that it takes a long time to complete the task. Academic hospitals became training grounds for generative AI usage in the medical fields. Generative AI is a tool with a lot of potential, but it’s proven many times that it still needs a lot of work. The large language models for generative AI in medical documentation proved lacking. Is anyone really surprised? Apparently they were:
“Just in the past week, a study at the University of California, San Diego found that use of an LLM to reply to patient messages did not save clinicians time; another study at Mount Sinai found that popular LLMs are lousy at mapping patients’ illnesses to diagnostic codes; and still another study at Mass General Brigham found that an LLM made safety errors in responding to simulated questions from cancer patients. One reply was potentially lethal.”
Why doesn’t common sense prevail in these cases? Yes, generative AI should be tested so the data will back up the logical outcome. It’s called the scientific method for a reason. Why does everyone act surprised, however? Stop marveling at the obvious shortcomings of lackluster AI tools and focus on making them better. Use these tests to find the bugs, fix them, and turn the tools into practical applications that work. Is that so hard to accomplish?
Whitney Grace, May 14, 2024
Wanna Be Happy? Use the Internet
May 13, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The glory days of the Internet have faded. Social media, AI-generated baloney, and brain-numbing TikTok-esque short videos? Outstanding ways to be happy. What about endless online scams, phishing and smishing, and deep fake voice calls to grandma from grandchildren needing money? Yes, guaranteed uplifts to sagging spirits.
The idea of a payoff in a coffee shop is silly. Who would compromise academic standards for a latte and a pile of cash? Absolutely no one involved in academic pursuits. Good enough, MSFT Copilot. Good enough.
When I read two of the “real” news stories about how the Internet manufactures happiness, I asked myself, “Exactly what’s with this study?” The PR push to say happy things about online reminded me of the OII or Oxford Internet Institute and some of its other cheerleading. And what is the OII? It is an outfit which receives some university support, funds from private industry, and foundation cash; for example, the Shirley Institute.
In my opinion, it is often difficult to figure out if the “research” is wonky due to its methodology, the desire to keep some sources of funding writing checks, or a nifty way to influence policies in the UK and elsewhere. The magic of the “Oxford” brand gives the outfit some cachet for those who want to collect conference name tags to bedeck their office coat hangers.
The OII is back in the content marketing game. I read the BBC’s “Internet Access Linked to Higher Wellbeing, Study Finds” and the Guardian’s “Internet Use Is Associated with Greater Wellbeing, Global Study Finds.” Both articles are generated from the same PR-type verbiage. But the assertion is undermined by this statement from the BBC’s rewrite of the OII’s PR:
The study was not able to prove cause and effect, but the team found measures of life satisfaction were 8.5% higher for those who had internet access. Nor did the study look at the length of time people spent using the internet or what they used it for, while some factors that could explain associations may not have been considered.
The Oxford brand and the big numbers about a massive sample size cannot hide one awkward fact: There is little evidence that happiness drips from Internet use. Convenience? Yep. Entertainment? Yep. Crime? Yep. Self-harm, drug use or experimentation, meme amplification. Yep, yep, yep.
Several questions arise:
- Why is the message “online is good” suddenly big news? If anything, the idea runs counter to the significant efforts to contain access to potentially harmful online content in the UK and elsewhere. Gee, I wonder if the companies facing some type of sanctions are helping out the good old OII?
- What’s up with Oxford University itself? Doesn’t it have more substantive research to publicize? Perhaps Oxford should emulate the “Naked Scientist” podcast or lobby to get Melvin Bragg to report about more factual matters? Does Oxford have an identity crisis?
- And the BBC and the Guardian! Have the editors lost the plot? Don’t these professionals have firsthand knowledge about the impact of online on children and young adults? Don’t they try to talk to their kids or grandkids at the dinner table when the youthful progeny of “real” news people are using their mobile phones?
I like facts which push back against received assumptions. But the claim that online use helps those who engage with it needs a bit more precision, clearer thinking, and less tenuous cause-and-effect hoo-hah, in my opinion.
Stephen E Arnold, May 13, 2024
Apple and a Recycled Carnival Act: Woo Woo New New!
May 13, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
A long time ago, for a project related to a new product which was cratering, one person on my team suggested I read a book by James B. Twitchell. Carnival Culture: The Trashing of Taste in America provided a broad context, but the information in the analysis of taste was not going to save the enterprise software I was supposed to analyze. In general, I suggest that investment outfits with an interest in online information give me a call before writing checks to the tale-spinning entrepreneurs.
A small creative spark getting smashed in an industrial press. I like the eyes. The future of humans in Apple’s understanding of the American datasphere. Wow, look at those eyes. I can hear the squeals of pain, can’t you?
Dr. Twitchell did a good job, in my opinion, of making clear that some cultural actions are larger than a single promotion. Popular movies and people like P.T. Barnum (the circus guy) explain facets of America. These two examples are not just entertaining; they are making clear what revs the engines of the US of A.
I read “Hating Apple Goes Mainstream” and realized that Apple is doing the marketing for which it is famous. The roll out of the iPad was accompanied by a high-resolution, big-money advertisement. If you are around young children, squishy plastic toys are often in small fingers. Squeeze the toy and the eyes bulge. In the image above, a child’s toy is smashed in what seems to me to be the business end of an industrial press manufactured by MSE Technology Ltd in Turkey.
Thanks, MSFT Copilot. Glad you had time to do this art. I know you are busy on security or is it AI or is AI security or security AI? I get so confused.
The Apple iPad has been a bit of an odd duck. It is a good substitute for crappy Kindle-type readers. We have a couple, but they don’t get much use. Everything is a pain for me because the super duper Apple technology does not detect my fingers. I bought the gizmos so people could review the PowerPoint slides for one of my lectures at a conference. I also experimented with the iPad as a teleprompter. After a couple of tests, getting content on the device, controlling it, and fiddling so the darned thing knew I was poking the screen to cause an action — I put the devices on the shelf.
Forget the specific product; let’s look at the cited write up’s comments about the Apple “carnival culture” advertisement. The write up states:
Apple has lost its presumption of good faith over the last five years with an ever-larger group of people, and now we’ve reached a tipping point. A year ago, I’m sure this awful ad would have gotten push back, but I’m also sure we’d heard more “it’s not that big of a deal” and “what Apple really meant to say was…” from the stalwart Apple apologists the company has been able to count on for decades. But it’s awfully quiet on the fan-boy front.
I think this means the attempt to sell sent weird messages about a company people once loved. What’s going on, in my opinion, is that Apple is signaling what technology is going to do: people who once used software to create words, images, and data exhaust will be secondary to the cosmetics of technology.
In short, people and their tools will be replaced by a gizmo or gizmos that are similar to bright lights and circus posters. What do these artifacts tell us? My take on the Apple iPad M4 super duper creative juicer is, at this time:
- So what? I have an M2 Air, and it does what I hoped the two touch-insensitive iPads would do.
- Why create a form factor that is likely to get crushed when I toss my laptop bag on a security screening belt? Apple’s products are, in my view, designed to be landfill residents.
- Apple knows in its subconscious corporate culture heat sink that smart software, smart services, and dumb users are the future. The wonky, expensive, high-resolution advertisement shouts, “We know you are going to be out of a job. You will be like the yellow squishy toy.”
The message Apple is sending is that innovation has moved from utility to entertainment to the carnival sideshow. Put on your clown noses, people. Buy Apple.
Stephen E Arnold, May 13, 2024
Open Source and Open Doors. Bad Actors, Come On In
May 13, 2024
Open source code is awesome because it allows developers to create projects without paying proprietary fees, and it inspires innovation. Open source code, however, has problems, especially when bad actors know how to exploit it. OpenSSF shares how a recent open source backdoor attempt left many people vulnerable: “Open Source Security (OpenSSF) And OpenJS Foundations Issue Alert For Social Engineer Takeovers Of Open Source Projects.”
The OpenJS Foundation hosts JavaScript projects used by billions of websites. The foundation recently discovered a social engineering takeover attempt similar to the XZ Utils backdoor incident. The OpenJS Foundation and the Open Source Security Foundation are alerting developers about the threat.
The OpenJS Foundation received a series of suspicious emails that advised project administrators to update their JavaScript. The update description was vague, and the senders wanted the administrators to grant them access to projects. The scam emails are part of the endless bag of tricks black hat hackers use to manipulate administrators so they can access source code.
The foundations are warning administrators about the scams and sharing tips about how to recognize scams. Bad actors exploit open source developers:
“These social engineering attacks are exploiting the sense of duty that maintainers have with their project and community in order to manipulate them. Pay attention to how interactions make you feel. Interactions that create self-doubt, feelings of inadequacy, of not doing enough for the project, etc. might be part of a social engineering attack.
Social engineering attacks like the ones we have witnessed with XZ/liblzma were successfully averted by the OpenJS community. These types of attacks are difficult to detect or protect against programmatically as they prey on a violation of trust through social engineering. In the short term, clearly and transparently sharing suspicious activity like those we mentioned above will help other communities stay vigilant. Ensuring our maintainers are well supported is the primary deterrent we have against these social engineering attacks.”
These scams aren’t surprising. There need to be more organizations like OpenJS and Open Source Security because their intention is to protect the common good. They’re on the side of the little person, unlike many politicians and corporations.
Whitney Grace, May 13, 2024
Will Google Behave Like Telegram?
May 10, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I posted a short item on LinkedIn about Telegram’s blocking of Ukraine’s information piped into Russia via Telegram. I pointed out that Pavel Durov, the founder of VK and Telegram, told Tucker Carlson that he was into “free speech.” A few weeks after the interview, Telegram blocked the data from Ukraine for Russia’s Telegram users. One reason given, as I recall, was that Apple was unhappy. Telegram rolled over and complied with a request that seems to benefit Russia more than Apple. But that’s just my opinion. One of my team verified the block with a Ukrainian interacting with senior professionals in Ukraine. Not surprisingly, Ukraine’s use of Telegram is under advisement. I think that means, “Find another method of sending encrypted messages and use that.” Compromised communications can translate to “Rest in Peace” in real time.
A Hong Kong rock band plays a cover of the popular hit Glory to Hong Kong. The bats in the sky are similar to those consumed in Shanghai during a bat festival. Thanks, MSFT Copilot. What are you working on today? Security or AI?
I read “Hong Kong Puts Google in Hot Seat With Ban on Protest Song.” That news story states:
The Court of Appeal on Wednesday approved the government’s application for an injunction order to prevent anyone from playing Glory to Hong Kong with seditious intent. While the city has a new security law to punish that crime, the judgment shifted responsibility onto the platforms, adding a new danger that just hosting the track could expose companies to legal risks. In granting the injunction, judges said prosecuting individual offenders wasn’t enough to tackle the “acute criminal problems.”
What’s Google got to do with that toe tapper Glory to Hong Kong?
The write up says:
The injunction “places Google, media platforms and other social media companies in a difficult position: Essentially pitting values such as free speech in direct conflict with legal obligations,” said Ryan Neelam, program director at the Lowy Institute and former Australian diplomat to Hong Kong and Macau. “It will further the broader chilling effect if foreign tech majors do comply.”
The question is, “Roll over as Telegram allegedly has, or fight Hong Kong and by extension everyone’s favorite streaming video influencer, China?” What will Google do? Scrub Glory to Hong Kong, number one with a bullet on someone’s hit parade I assume.
My guess is that Google will go to court, appeal, and then take appropriate action to preserve whatever revenue is at stake. I do know The Sundar & Prabhakar Comedy Show will not use Glory to Hong Kong as its theme for its 2024 review.
Stephen E Arnold, May 10, 2024