Googzilla: Pointing the Finger of Blame Makes Sense I Guess
June 13, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Here you are: The Thunder Lizard of Search Advertising. Pesky outfits like Microsoft have been quicker than Billy the Kid shooting drunken farmers when it comes to marketing smart software. But the real problem in Deadwood is a bunch of do-gooders turned into revolutionaries undermining the granite foundation of the Google. I have this information from an unimpeachable source: An alleged Google professional talking on a podcast. The news release titled “Google Engineer Says Sam Altman-Led OpenAI Set Back AI Research Progress By 5-10 Years: LLMs Have Sucked The Oxygen Out Of The Room” explains that the actions of OpenAI are causing the Thunder Lizard to wobble.
One of the team sets himself apart by blaming OpenAI and his colleagues, not himself. Will the sleek, entitled professionals pay attention to this criticism or just hear “OpenAI”? Thanks, MSFT Copilot. Good enough art.
Consider this statement in the cited news release:
He [an employee of the Thunder Lizard] stated that OpenAI has “single-handedly changed the game” and set back progress towards AGI by a significant number of years. Chollet pointed out that a few years ago, all state-of-the-art results were openly shared and published, but this is no longer the case. He attributed this change to OpenAI’s influence, accusing them of causing a “complete closing down of frontier research publishing.”
I find this interesting. One company, its deal with Microsoft, and that firm’s management meltdown produced a “complete closing down of frontier research publishing.” What about the Dr. Timnit Gebru incident about the “stochastic parrot”?
The write up included this gem from the Googley acolyte of the Thunder Lizard of Search Advertising:
He went on to criticize OpenAI for triggering hype around Large Language Models or LLMs, which he believes have diverted resources and attention away from other potential areas of AGI research.
However, DeepMind — apparently the nerve center of the one best way to generate news releases about computational biology — has been generating PR. That does not count because it is real-world smart software, I assume.
But there are metrics to back up the claim that OpenAI is the Great Destroyer. The write up says:
Chollet’s [the Googler, remember?] criticism comes after he and Mike Knoop, [a non-Googler] the co-founder of Zapier, announced the $1 million ARC-AGI Prize. The competition, which Chollet created in 2019, measures AGI’s ability to acquire new skills and solve novel, open-ended problems efficiently. Despite 300 teams attempting ARC-AGI last year, the state-of-the-art (SOTA) score has only increased from 20% at inception to 34% today, while humans score between 85-100%, noted Knoop. [emphasis added, editor]
Let’s assume that the effort and money poured into smart software in the last 12 months boosted one key metric by 14 percentage points. Doesn’t that leave LLMs and smart software in general far, far behind the average humanoid?
But here’s the killer point:
… training ChatGPT on more data will not result in human-level intelligence.
Let’s reflect on the information in the news release.
- If the data are accurate, LLM-based smart software has reached a dead end. I am not sure the lawsuits will stop, but perhaps some of the hyperbole will subside?
- If these insights into the weaknesses of LLMs are accurate, why has Google continued to roll out services based on a dead-end model, suffered assorted problems, and then demonstrated its management prowess by pulling back certain services?
- Who is running the Google smart software business? Is it the computationalists combining components of proteins or is the group generating blatantly wonky images? A better question is, “Is anyone in charge of non-advertising activities at Google?”
My hunch is that this individual is representing a percentage of a fractionalized segment of Google employees. I do not think a senior manager is willing to say, “Yes, I am responsible.” The most illuminating facet of the article is the clear cultural preference at Google: Just blame OpenAI. Failing that, blame the users, blame the interns, blame another team, but do not blame oneself. Am I close to the pin?
Stephen E Arnold, June 13, 2024
Modern Elon Threats: Tossing Granola or Grenades
June 13, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Bad me. I ignored the Apple announcements. I did spot one interesting somewhat out-of-phase reaction to Tim Apple’s attempt to not screw up again. “Elon Musk Calls Apple Devices with ChatGPT a Security Violation.” Since the Tim Apple crowd was learning about what was “to be,” not what is, this statement caught my attention:
If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.
I want to comment about the implicit “then” in this remarkable prose output from Elon Musk. On the surface, the “then” is that the most affluent mobile phone users will be prohibited from the X.com service. I wonder how advertisers are reacting to this idea of cutting down the potential eyeballs for their products if those products are advertised to a group of prospects no longer clutching Apple iPhones. I don’t advertise, but I can game out how the meetings will go between the company with advertising dollars and the agency helping that company make informed advertising decisions. (Let’s assume that advertising “works,” and that advertising outfits are informed, for the purpose of this blog post.)
A tortured genius struggles against the psychological forces that ripped the Apple car from the fingers of its rightful owner. Too bad. Thanks, MSFT Copilot. How is your coding security coming along? What about the shut down of the upcharge for Copilot? Oh, no answer. That’s okay. Good enough.
Let’s assume Mr. Musk “sees” something a dinobaby like me cannot. What’s with the threat logic? The loss of a beloved investment? A threat to a to-be artificial intelligence company destined to blast into orbit on a tower of intellectual rocket fuel? Mr. Musk has detected a signal. He has interpreted. And he has responded with an ultimatum. That’s pretty fast action, even for a genius. I started college in 1962, and I dimly recall a class called Psych 101. Even though I attended a low-ball institution, the knowledge value of the course was evident in the large and shabby lecture room with a couple of hundred seats.
Threats, if I am remembering something that took place 62 years ago, tell more about the entity issuing the threat than the actual threat event itself. The words worming from the infrequently accessed cupboards of my mind are linked to an entity wanting to assert, establish, or maintain some type of control. Slapping quasi-ancient psycho-babble on Mr. Musk is not fair to the grand profession of psychology. However, it does appear to reveal that whatever Apple thinks it will do in its “to be,” coming-soon service struck a nerve in Mr. Musk’s super-bright, well-developed brain.
I surmise there is some insecurity with the Musk entity. I can’t figure out the connection between what amounts to vaporware and a threat to behead or de-iPhone a potential bucket load of prospects for advertisers to pester. I guess that’s why I did not invent the Cybertruck, a boring machine, and a rocket ship.
But a threat over vaporware in a field which has demonstrated that Googzilla, Microsoft, and others have dropped their baskets of curds and whey is interesting. The speed with which Mr. Musk reacts suggests to me that he perceives the Apple vaporware as an existential threat. I see it as another big company trying to grab some fruit from the AI tree until the bubble deflates. Software does have a tendency to disappoint, build up technical debt, and then evolve to the weird service which no one can fix, change, or kill because meaningful competition no longer exists. When will the IRS computer systems be “fixed”? When will airline reservations systems serve the customer? When will smart software stop hallucinating?
I actually looked up some information about threats from the recently disgraced fake research publisher John Wiley & Sons. “Exploring the Landscape of Psychological Threat” reminded me why I thought psychology was not for me. With weird jargon and some diagrams, the threat may be linked to Tesla’s rumored attempt to fall in love with Apple. The product of this interesting genetic bonding would be the Apple car, oodles of cash for Mr. Musk, and the worshipful affection of the Apple acolytes. But the online date did not work out. Apple swiped Tesla into the loser bin. Now Mr. Musk can get some publicity, put X.com (don’t you love Web sites that remind people of pornography on the Dark Web?) in the news, and cause people like me to wonder. “Why dump on Apple?” (The outfit has plenty of worries with the China thing, doesn’t it? What about some anti-trust action? What about the hostility of M3 powered devices?)
Here’s my take:
- Apple Intelligence is a better “name” than Mr. Musk’s AI company xAI. Apple gets to use “AI” but without the porn hook.
- A controversial social media emission will stir up the digital elite. Publicity is good. Just ask Michael Cimino of Heaven’s Gate fame.
- Mr. Musk’s threat provides an outlet for the failure to make Tesla the Apple car.
What if I am wrong? [a] I don’t care. I don’t use an iPhone, Twitter, or online advertising. [b] A GenX, Y, or Z pooh-bah will present the “truth” and set the record straight. [c] Mr. Musk’s threat will be like the result of a Boring Company operation. A hole, a void.
Net net: Granola. The fast response to what seems to be “coming soon” vaporware suggests a potential weak spot in Mr. Musk’s makeup. Is Apple afraid? Probably not. Is Mr. Musk? Yep.
Stephen E Arnold, June 13, 2024
Detecting AI-Generated Research Increasingly Difficult for Scientific Journals
June 12, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Reputable scientific journals would like to publish only papers written by humans, but they are finding it harder and harder to enforce that standard. Researchers at the University of Chicago Medical Center examined the issue and summarize their results in, “Detecting Machine-Written Content in Scientific Articles,” published at Medical Xpress. Their study was published in the Journal of Clinical Oncology Clinical Cancer Informatics on June 1. We presume it was written by humans.
The team used commercial AI detectors to evaluate over 15,000 oncology abstracts from 2021-2023. We learn:
“They found that there were approximately twice as many abstracts characterized as containing AI content in 2023 as compared to 2021 and 2022—indicating a clear signal that researchers are utilizing AI tools in scientific writing. Interestingly, the content detectors were much better at distinguishing text generated by older versions of AI chatbots from human-written text, but were less accurate in identifying text from the newer, more accurate AI models or mixtures of human-written and AI-generated text.”
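The arithmetic behind that “twice as many” signal is simple to sketch. Here is a minimal, hypothetical illustration of the year-over-year tally such a study implies; the detector verdicts and sample data below are fabricated placeholders, not the Chicago team’s actual pipeline:

```python
# Hypothetical illustration: tallying the share of abstracts an AI-content
# detector flags per publication year, at toy scale.
from collections import defaultdict

def flag_rates(abstracts):
    """abstracts: iterable of (year, flagged) pairs, where flagged is a bool
    emitted by some detector. Returns {year: fraction_flagged}."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for year, is_flagged in abstracts:
        totals[year] += 1
        if is_flagged:
            flagged[year] += 1
    return {y: flagged[y] / totals[y] for y in totals}

# Fabricated sample: the 2023 flag rate comes out higher, echoing the study.
sample = [(2021, False), (2021, True), (2022, False), (2022, True),
          (2023, True), (2023, True), (2023, False), (2023, True)]
print(flag_rates(sample))  # → {2021: 0.5, 2022: 0.5, 2023: 0.75}
```

The real study, of course, ran commercial detectors over 15,000-plus abstracts; this only shows the shape of the comparison.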
Yes, that tracks. We wonder if it is even harder to detect AI-generated research that is, hypothetically, run through two or three different smart rewrite systems. Oh, who would do that? Maybe the former president of Stanford University?
The researchers predict:
“As the use of AI in scientific writing will likely increase with the development of more effective AI language models in the coming years, Howard and colleagues warn that it is important that safeguards are instituted to ensure only factually accurate information is included in scientific work, given the propensity of AI models to write plausible but incorrect statements. They also concluded that although AI content detectors will never reach perfect accuracy, they could be used as a screening tool to indicate that the presented content requires additional scrutiny from reviewers, but should not be used as the sole means to assess AI content in scientific writing.”
That makes sense, we suppose. But humans are not perfect at spotting AI text, either, though there are ways to train oneself. Perhaps if journals combine savvy humans with detection software, they can catch most AI submissions. At least until the next generation of ChatGPT comes out.
Cynthia Murrell, June 12, 2024
What Is McKinsey & Co. Telling Its Clients about AI?
June 12, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Years ago (decades now) I attended a meeting at the firm’s technology headquarters in Bethesda, Maryland. Our carpetland welcomed the sleek, well-fed, and super entitled Booz, Allen & Hamilton professionals to a low-profile meeting to discuss the McKinsey PR problem. I attended because my boss (the head of the technology management group) assumed I would be invisible to the Big Dog BAH winners. He was correct. I was an off-the-New-York radar “manager,” buried in an obscure line item. So there I was. And what was the subject of this periodic meeting? The Harvard Business Review-McKinsey Award. The NY Booz, Allen consultants failed to come up with this idea. McKinsey did. As a result, the technology management group (soon to overtake the lesser MBA side of the business) had to rehash the humiliation of not getting associated with the once-prestigious Harvard University. (The ethics thing, the medical research issue, and the protest response have tarnished the silver Best in Show trophy. Remember?)
One of the most capable pilots found himself answering questions from a door-to-door salesman covering his territory somewhere west of Terre Haute. The pilot who has survived but sits amidst a burning experimental aircraft ponders an important question, “How can I explain that the crash was not my fault?” Thanks, MSFT Copilot. Have you ever found yourself in a similar situation? Can you “recall” one?
Now McKinsey has AI data. Actual hands-on, unbillable work product with smart software. Is the story in the Harvard Business Review? A Netflix documentary? A million-view TikTok hit? A “60 Minutes” segment? No, nyet, unh-unh, negative. The story appears in Joe Mansueto’s Fast Company Magazine! Mr. Mansueto founded Morningstar and has expanded his business interests to online publications and giving away some of his billions.
The write up is different from McKinsey’s stentorian pontifications. It is a bit like mining coal in a hard-rock dig deep underground: a dirty, hard, and ultimately semi-interesting job. Smart software almost broke the McKinsey marvels.
“We Spent Nearly a Year Building a Generative AI Tool. These Are the 5 (Hard) Lessons We Learned” presents information which would have been marketing gold for the McKinsey decades ago. But this is 2024, more than 18 months after Microsoft’s OpenAI bomb blast at Davos.
What did McKinsey “learn”?
McKinsey wanted to use AI to “bring together the company’s vast but separate knowledge sources.” Of course, McKinsey’s knowledge is “vast.” How could it be tiny? The firm’s expertise in pharmaceutical efficiency methods exceeds that of many other consulting firms. What’s more important, profits or deaths? Answer: I vote for profits; doesn’t everyone, except for a few complainers in Eastern Kentucky, West Virginia, and other flyover states?
The big reveal in the write up is that McKinsey & Co learned that its “vast” knowledge is fragmented and locked in Microsoft PowerPoint slides. After the non-billable overhead work, the bright young future corporate leaders discovered that smart software could only figure out about 15 percent of the knowledge payload in a PowerPoint document. With the vast knowledge in PowerPoint, McKinsey learned that smart software was a semi-helpful utility. The smart software was not able to “readily access McKinsey’s knowledge, generate insights, and thus help clients” or newly-hired consultants do better work, faster, and more economically. Nope.
So what did McKinsey’s band of bright smart software wizards do? The firm coded up its own content parser. How did that home brew software work? The grade is a solid B. The cobbled together system was able to make sense of 85 percent of a PowerPoint document. The other 15 percent gives the new hires something to do until a senior partner intervenes and says, “Get billable or get gone, you very special buttercup.” Non-billable and a future at McKinsey are not like peanut butter and jelly.
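For what it is worth, getting the basic text out of a slide deck is not rocket science; the hard 15 percent is presumably diagrams, SmartArt, and embedded charts. Here is a stdlib-only sketch of the easy part. It assumes nothing about McKinsey’s actual parser, and the tiny in-memory “deck” at the bottom is a fabricated example: a .pptx file is just a zip of XML, with visible slide text living in `<a:t>` elements.

```python
# Minimal sketch of a PowerPoint text extractor using only the stdlib.
import io
import re
import zipfile
import xml.etree.ElementTree as ET

def extract_slide_text(pptx_bytes):
    """Return the text runs found across all ppt/slides/slideN.xml parts."""
    runs = []
    with zipfile.ZipFile(io.BytesIO(pptx_bytes)) as z:
        slide_names = sorted(n for n in z.namelist()
                             if re.fullmatch(r"ppt/slides/slide\d+\.xml", n))
        for name in slide_names:
            root = ET.fromstring(z.read(name))
            # <a:t> (DrawingML) carries the visible text of a shape.
            for el in root.iter():
                if el.tag.endswith("}t") and el.text:
                    runs.append(el.text)
    return runs

# Build a tiny fake "deck" in memory to exercise the parser.
A = "http://schemas.openxmlformats.org/drawingml/2006/main"
slide_xml = f'<p:sld xmlns:p="x" xmlns:a="{A}"><a:t>Synergy</a:t></p:sld>'
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("ppt/slides/slide1.xml", slide_xml)
print(extract_slide_text(buf.getvalue()))  # → ['Synergy']
```

Text runs are the 85 percent a custom parser can reach; the tables, charts, and speaker notes are where the remaining grief lives.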
How did McKinsey characterize its 12-month journey into the reality of consulting baloney? The answer is a great one. Here it is:
With so many challenges and the need to work in a fundamentally new way, we described ourselves as riding the “struggle bus.”
Did the McKinsey workers break out into work songs to make the drudgery of deciphering PowerPoints go more pleasantly? I am thinking about Coal Miner’s Boogie by George Davis, West Virginia Mine Disaster by Jean Ritchie, or my personal favorite, Black Dust Fever by the Wildwood Valley Boys.
But the workers bringing brain to reality learned five lessons. One can, I assume, pay McKinsey to apply these lessons to a client firm experiencing a mental high from thinking about the payoffs from AI. On the other hand, consider these in this free blog post with my humble interpretation:
- Define a shared aspiration. My version: Figure out what you want to do. Get a plan. Regroup if the objective and the method don’t work or make much sense.
- Assemble a multi-disciplinary team. My version: Don’t load up on MBAs. Get individuals who can code, analyze content, and tap existing tools to accomplish specific tasks. Include an old geezer partner who can “explain” what McKinsey means when it suggests “managerial evolution.” Skip the ape to MBA cartoons.
- Put the user first. My version: Some lesser soul will have to use the system. Make sure the system is usable and actually works. Skip the minimum viable product and get to the quality of the output and the time required to use the system or just doing the work the old-fashioned way.
- Test, learn, repeat. My version: Convert the random walk into a logical and efficient workflow. Running around with one’s hair on fire is not a methodical process nor a good way to produce value.
- Measure and manage. My version: Fire those who failed. Come up with some verbal razzle-dazzle and sell the planning and managing work to a client. Do not do this work on overhead for the consultants who are billable.
What does the great reveal by McKinsey tell me? First, the baloney about “saving an average of up to 30 percent of a consultants’ time by streamlining information gathering and synthesis” sounds like the same old, same old pitched by enterprise search vendors for decades. The reality is that online access to information does not save time; it creates more work, particularly when data voids are exposed. Those old dog partners are going to have to talk with young consultants. No smart software is going to eliminate that task no matter how many senior partners want a silver bullet to kill the beast of a group of beginners.
The second “win” is the idea that “insights are better.” Baloney. Flipping through the famous executive memos to a client, reading the reports with the unaesthetic dash points, and looking at the slide decks created by coal miners of knowledge years ago still has to be done… by a human who is sober, motivated, and hungry for peer recognition. Software is not going to have the same thirst for getting a pat on the head and in some cases on another part of the human frame.
The struggle bus is loading up now. Just hire McKinsey to be the driver, the tour guide, and the outfit that collects the fees. One can convert failure into billability. That’s what the Fast Company write up proves. Eleven months and all they got was a ride on the digital equivalent of the Cybertruck, which turned out to be a much-hyped struggle bus.
AI may ultimately rule the world. For now, it simply humbles the brilliant minds at McKinsey and generates a story for Fast Company. Well, that’s something, isn’t it? Now about spinning that story.
Stephen E Arnold, June 12, 2024
MSFT: Security Is Not Job One. News or Not?
June 11, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The idea that free and open source software contains digital trap falls is one thing. Poisoned libraries which busy and confident developers snap into their software should not surprise anyone. What I did not expect was the information in “Malicious VSCode Extensions with Millions of Installs Discovered.” The write up in Bleeping Computer reports:
A group of Israeli researchers explored the security of the Visual Studio Code marketplace and managed to “infect” over 100 organizations by trojanizing a copy of the popular ‘Dracula Official’ theme to include risky code. Further research into the VSCode Marketplace found thousands of extensions with millions of installs.
I heard the “Job One” and “Top Priority” assurances before. So far, bad actors keep exploiting vulnerabilities and minimal progress is made. Thanks, MSFT Copilot, definitely close enough for horseshoes.
The write up points out:
Previous reports have highlighted gaps in VSCode’s security, allowing extension and publisher impersonation and extensions that steal developer authentication tokens. There have also been in-the-wild findings that were confirmed to be malicious.
How bad can this be? This be bad. The malicious code can be inserted and will happily deliver to a remote server via an HTTPS POST such information as:
the hostname, number of installed extensions, device’s domain name, and the operating system platform
Clever bad actors can do more even if the information they have is the description and code screen shot in the Bleeping Computer article.
Why? You are going to love the answer suggested in the report:
“Unfortunately, traditional endpoint security tools (EDRs) do not detect this activity (as we’ve demonstrated examples of RCE for select organizations during the responsible disclosure process), VSCode is built to read lots of files and execute many commands and create child processes, thus EDRs cannot understand if the activity from VSCode is legit developer activity or a malicious extension.”
That’s special.
The article reports that the research team poked around in the Visual Studio Code Marketplace and discovered:
- 1,283 items with known malicious code (229 million installs).
- 8,161 items communicating with hardcoded IP addresses.
- 1,452 items running unknown executables.
- 2,304 items using another publisher’s GitHub repo, indicating they are a copycat.
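The “hardcoded IP addresses” check is straightforward to approximate. Here is a hedged sketch of the technique as I imagine it, not the researchers’ actual tooling; real analysis would also handle obfuscation and whitelist benign addresses:

```python
# Flag extension source files that contain hardcoded IPv4 literals.
import re

# Matches dotted quads whose octets are each 0-255.
IPV4 = re.compile(r"\b(?:(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)\.){3}"
                  r"(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)\b")

def find_hardcoded_ips(source: str):
    """Return the unique IPv4 literals appearing in a source string."""
    return sorted(set(IPV4.findall(source)))

# Fabricated extension snippet for illustration.
sample = 'fetch("http://185.12.64.1/c2").then(r => r.json()); // note 127.0.0.1'
print(find_hardcoded_ips(sample))  # → ['127.0.0.1', '185.12.64.1']
```

A scan like this over a marketplace dump yields exactly the sort of raw counts the researchers report; deciding which hits are malicious is the hard part.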
Bleeping Computer says:
Microsoft’s lack of stringent controls and code reviewing mechanisms on the VSCode Marketplace allows threat actors to perform rampant abuse of the platform, with it getting worse as the platform is increasingly used.
Interesting.
Let’s step back. The US Federal government prodded Microsoft to step up its security efforts. The MSFT leadership said, “By golly, we will.”
Several observations are warranted:
- I am not sure I am able to believe anything Microsoft says about security
- I do not believe a “culture” of security exists within Microsoft. There is a culture, but it is not one which takes security seriously after a butt spanking by the US Federal government and Microsoft Certified Partners who have to work to address their clients’ issues. (How do I know this? On Wednesday, June 8, 2024, one of these partners at the TechnoSecurity & Digital Forensics Conference told me, “I have to take a break. The security problems with Microsoft are killing me.”)
- The “leadership” at Microsoft is loved by Wall Street. However, others fail to respond with hearts and flowers.
Net net: Microsoft poses a grave security threat to government agencies and the users of Microsoft products. Talking in dulcet tones may make some people happy. I think there are others who believe Microsoft wants government contracts. Its employees want an easy life, money, and respect. Would you hire a former Microsoft security professional? This is not a question of trust; this is a question of malfeasance. Smooth talking is the priority, not security.
Stephen E Arnold, June 11, 2024
AI and Ethical Concerns: Sure, When “Ethics” Means Money
June 11, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
It seems workers continue to flee OpenAI over ethical concerns. The Byte reports, “Another OpenAI Researcher Quits, Issuing Cryptic Warning.” Understandably unwilling to disclose details, policy researcher Gretchen Krueger announced her resignation on X. She did express a few of her concerns in broad strokes:
“We need to do more to improve foundational things, like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.”
Krueger emphasized that these important issues not only affect communities now but also influence who controls the direction of pervasive AI systems in the future. Right now, that control is in the hands of the tech bros running AI firms. Writer Maggie Harrison Dupré notes Krueger’s departure comes as OpenAI is dealing with a couple of scandals. Other high-profile resignations have also occurred in recent months. We are reminded:
“[Recent] departures include that of Ilya Sutskever, who served as OpenAI’s chief scientist, and Jan Leike, a top researcher on the company’s now-dismantled ’Superalignment’ safety team — which, in short, was the division effectively in charge of ensuring that a still-theoretical human-level AI wouldn’t go rogue and kill us all. Or something like that. Sutskever was also a leader within the Superalignment division. And to that end, it feels very notable that all three of these now-ex-OpenAI workers were those who worked on safety and policy initiatives. It’s almost as if, for some reason, they felt as though they were unable to successfully do their job in ensuring the safety and security of OpenAI’s products — part of which, of course, would reasonably include creating pathways for holding leadership accountable for their choices.”
Yes, most of us would find that reasonable. For members of that leadership, though, it seems escaping accountability is a top priority.
Cynthia Murrell, June 11, 2024
Publishers Sign Up for the Great Unknown: Risky, Oh, Yeah
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
OpenAI is paying for content. Why? Maybe to avoid lawsuits? Maybe to get access to “real” news to try to get ahead of its perceived rivals? Maybe because Sam AI-Man pushes forward while its perceived competitors do weird things like add features, launch services which are lousy, or which have the taste of the bitter fruit of Zuckus nepenthes.
Publishers are like beavers. Publishers have to do whatever they can to generate cash. Thanks, MSFT Copilot. Good enough. Not a cartoon and not a single dam, but just like MSFT security good enough, today’s benchmark of excellence.
“Journalists Deeply Troubled by OpenAI’s Content Deals with Vox, The Atlantic” is a good example of the angst Sam AI-Man is causing among “real” news outfits and their Fourth Estate professionals. The write up reports:
“Alarmed” writers unions question transparency of AI training deals with ChatGPT maker.
Oh, oh. An echo of Google’s Code Red am I hearing? No, what I hear is the ka-ching of the bank teller’s deposit system as the “owner” of the Fourth Estate professional business process gets Sam AI-Man’s money. Let’s not confuse “real” news with “real” money, shall we? In the current economic climate, money matters. Today it is difficult to sell advertising unless one is a slam dunk monopoly with an ad sales system that is tough to beat. Today it is tough to get those who consume news via a podcast or a public Web site to subscribe. I think that the number I heard for conversions is something like one or two subscribers per 100 visitors on a really good day. Most days are not really good.
“Real” journalists can be unionized. The idea is that their services have to be protected from the lawyers and bean counters who run many high-profile publishing outfits. The problem with unions is that these seek to limit what the proprietors can do in a largely unregulated capitalist set up like the one operating within the United States. In a long-forgotten pre-digital era, those in a union engaged in a dust up in 1921 at Blair Mountain in my favorite state, West Virginia. Today, the union members are more likely to launch social media posts and hook up with a needy lawyering outfit.
Let me be clear. Some of the “real” journalists will find fame as YouTubers, pundits on what’s left of traditional TV or cable news programs, or by writing a book which catches the attention of Netflix. Most, however, will do gig work and migrate to employment adjacent to “real” news. The problem is that in any set of “real” journalists, the top 10 percent will be advantaged. The others may head to favelas, their parent’s basement, or a Sheetz parking lot in my favorite state for some chemical relief. Does that sound scary?
Think about this.
Sam AI-Man, according to the Observer’s story “Sam Altman Says OpenAI Doesn’t Fully Understand How GPT Works Despite Rapid Progress,” admits he does not fully grasp his own creation. These money-focused publishers are signing up for something that not only do they not understand, but that the fellow surfing the crazy wave of smart software does not understand either. But taking money and worrying about the future is not something publishing executives in their carpetlands think about. Money in hand is good. Worrying about the future, according to their life coach, is not worth the mental stress. It is go-go in a now-now moment.
I cannot foretell the future. If I could, I would not be an 80-year-old dinobaby sitting in my home office marveling at the downstream consequences of what amounts to a 2024 variant of the DR-LINK technology. I can offer a handful of hypotheses:
- “Real” journalists are going to find that publishers cut deals to get cash without thinking of the “real” journalists or the risks inherent in hopping in a small cabin with Sam AI-Man for a voyage in the unknown.
- Money and cost reductions will fuel selling information to Sam AI-Man and any other Big Tech outfit which comes calling with a check book. Money now is better than looking at a graph of advertising sales over the last five years. Money trumps “real” journalists’ complaints when they are offered part-time work or an opportunity to find their future elsewhere.
- Publishing outfits have never been technology adept, and I think that engineered blindness is now built into the companies’ management processes. Change is going to make publishing an interesting business. That’s good for consultants and bankruptcy specialists. It will not be so good for those who do not have golden parachutes or platinum flying cars.
Net net: What are the options for the “real” journalists’ unions? Lawyers, maybe. Social media posts. Absolutely. Will these prevent publishers from doing what publishers have to do? Nope.
Stephen E Arnold, June 7, 2024
Meta Deletes Workplace. Why? AI!
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Workplace was Meta’s attempt to jump into the office-productivity ring and face off against the likes of Slack and MS Teams. It did not fare well. Yahoo Finance shares the brief write-up, “Meta Is Shuttering Workplace, Its Enterprise Version of Facebook.” The company is spinning the decision as a shift to bigger and better things. Bloomberg’s Kurt Wagner cites reporting from TechCrunch as he writes:
“The service operated much like the original Facebook social network, but let people have separate accounts for their work interactions. Workplace had as many as 7 million total paying subscribers in May 2021. … Meta once had ambitious plans for Workplace, and viewed it as a way to make money through subscriptions as well as a chance to extend Facebook’s reach by infusing the product into work and office settings. At one point, Meta touted a list of high-profile customers, including Starbucks Corp., Walmart Inc. and Spotify Technology SA. The company will continue to focus on workplace-related products, a spokesperson said, but in other areas, such as the metaverse by building features for the company’s Quest VR headsets.”
The Meta spokesperson repeated the emphasis on those future products, also stating:
“We are discontinuing Workplace from Meta so we can focus on building AI and metaverse technologies that we believe will fundamentally reshape the way we work.”
Meta will continue to use Workplace internally, but everyone else has until the end of August 2025 before the service ends. Meta plans to keep user data accessible until the end of May 2026. The company also pledges to help users shift to Zoom’s Workvivo platform. What, no forced migration into the Metaverse and its proprietary headsets? Not yet, anyway.
Cynthia Murrell, June 7, 2024
OpenAI: Deals with Apple and Microsoft Squeeze the Google
June 6, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Do you remember your high school biology class? You may have had a lab partner, preferably a person with dexterity and a steady hand. Dissecting creatures and producing recognizable parts was important. Otherwise, how could one identify the components when everything was a glutinous mash up of white, red, pink, gray, and — yes — even green?
That’s how I interpret the OpenAI deals the company has with Apple and Microsoft. What are these two large, cash-rich, revenue-hungry companies going to do? The illustration suggests that the two want to corral Googzilla, put the beastie in a stupor, and then take the creature apart.
The little Googzilla is in the lab. Two wizards are going to try to take the creature apart. One of the bio-data operators is holding tweezers to grab the beastie and place it on an adhesive gel pad. The other is balancing the creature to reassure it that it may once again be allowed to roam free in a digital Roatan. The bio-data experts may have another idea. Thanks, MSFT. Did you know you are the character with the tweezers?
Well, maybe the biology lab metaphor is not appropriate. Oh, heck, I am going to stick with the trope. Microsoft has rammed Copilot and its other AI deals in front of Windows users worldwide. Now Apple, late to the AI game, went to the AI dance hall and picked the star-crossed OpenAI as the service it would take to the smart software recital.
If you want to get some color about Apple and OpenAI, navigate to “Apple and OpenAI Allegedly Reach Deal to Bring ChatGPT Functionality to iOS 18.”
I want to focus on what happens before the lab partners try to chop up the little Googzilla.
Here are the steps:
- Use tweezers to grab the beastie
- Squeeze the tweezers to prevent the beastie from escaping to the darkness under the lab cabinets
- Gently lift the beastie
- Place the beastie on the adhesive gel
I will skip the part of the process which involves anesthetizing the beastie and beginning the in vivo procedures. Just use your imagination.
Now back to the four steps. My view is that neither Apple nor Microsoft will actively cooperate to make life difficult for the baby Googzilla, which represents a fledgling smart software activity. Here’s my vision.
Apple will do what Apple does, just with OpenAI and ChatGPT. At some point, Apple, which is a kind and gentle outfit, may decide not to chop off Googzilla’s foot. Apple may offer the beastie a reprieve. After all, Apple knows Google will pay big bucks to be the default search engine for Safari. The foot remains attached, but there is some shame attached to being number two. No first prize, just runner-up: How is that for a creature who views itself as the world’s smartest, slickest, most wonderfulest entity? Answer: Bad.
The squeezing will be uncomfortable, but what can the beastie do? The elevation causes the beastie to become lightheaded. Its decision-making capability, already suspect, becomes more addled and unpredictable.
Then the adhesive gel. Mobility is impaired. Fear causes the beastie’s heart to pound. The beastie becomes woozy. The beastie is about to wonder if it will survive.
To sum up the situation: The Google is hampered by:
- An AI competitor which has cut deals that restrict Google to some degree
- Deal partners who are out for revenue, which is thicker than blood
- A demonstrated loss of some management capability, which may deteriorate at a more rapid pace.
Today’s world may be governed by techno-feudalists, and we are going to get a glimpse of what happens when a couple of these outfits tag team a green beastie. This will be an interesting situation to monitor.
Stephen E Arnold, June 6, 2024
Large Dictators. Name the Largest
June 6, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I read “Social Media Bosses Are the Largest Dictators, Says Nobel Peace Prize Winner.” I immediately thought of “fat” dictators; for example, Benito Mussolini, but I may have him mixed up with Charles Laughton in “Mutiny on the Bounty.”
A mother is trying to implement the “keep your kids off social media” recommendation. Thanks, MSFT Copilot. Good enough.
I think the idea intended is something along the lines of “unregulated companies and their CEOs have more money and power than some countries. These CEOs act like dictators on a par with Julius Caesar. Brutus and friends took out Julius, but the heads of technopolies are indifferent to laws, social norms, and the limp limbs of ethical behavior.”
That’s a lot of words. Ergo: Largest dictators is close enough for horseshoes. It is 2024, and no one wants old-fashioned ideas like appropriate business activities to get in the way of making money and selling online advertising.
The write up shares the quaint ideas of a Nobel Peace Prize winner. Here are the main points about social media and technology from someone who is interested in peace:
- Tech bros are dictators with considerable power over information and ideas
- Tech bros manipulate culture, language, and behavior
- The companies these dictators run “change the way we feel” and “change the way we see the world and change the way we act”
I found this statement from the article suggestive:
“In the Philippines, it was rich versus poor. In the United States, it’s race,” she said. “Black Lives Matter … was bombarded on both sides by Russian propaganda. And the goal was not to make people believe one thing. The goal was to burst this wide open to create chaos.” The way tech companies are “inciting polarization, inciting fear and anger and hatred” changes us “at a personal level, a societal level”, she said.
What’s the fix? A speech? Two actions are needed:
- Dump the protection afforded the dictators by the 1996 Communications Decency Act
- Prevent children from using social media.
Now it is time for a reality check. Changing the Communications Decency Act will take some time. Some advocates have been chasing this legal Loch Ness monster for years. The US system is sensitive to “laws” and lobbyists. Change is slow and regulations are often drafted by lobbyists. Therefore, don’t hold your breath on revising the CDA by the end of the week.
Second, go to a family-oriented restaurant in the US. How many of the children have mobile phones? Now, be a change expert, and try to get the kids at a nearby table to give you their mobile devices. Let me know how that works out, please.
Net net: The Peace Prize winner’s ideas are interesting. That’s about it. And the fat dictators? Keto diets and chemicals do the trick.
Stephen E Arnold, June 6, 2024