The UK AI Safety Institute: Coming to America
June 13, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
One of Neil Diamond’s famous songs is “They’re Coming To America,” which tells of an immigrant’s journey to the US in pursuit of the American dream. The chorus is the most memorable part of the song because it repeats the word “today” until the conclusion. In response to the growing concerns about AI’s impact on society, The Daily Journal reports that “UK’s AI Safety Institute Expands To The US, Set To Open US Counterpart In San Francisco.” AI safety is coming to America today, or at least by summer 2024.
The Daily Journal’s Bending feature details the following:
“The UK announced it will open a US counterpart to its AI Safety Summit institute in San Francisco this summer to test advanced AI systems and ensure their safety. The expansion aims to recruit a research director and technical staff headed in San Francisco and increase cooperation with the US on AI safety issues. The original UK AI Safety Institute currently has a team of 30 people and is chaired by tech entrepreneur Ian Hogarth. Since being founded last year, the Institute has tested several AI models on challenges but they still struggle with more advanced tests and producing harmful outputs.”
The UK and US will shape the global future of AI. Because of their prevalence in western, capitalist societies, these countries are home to huge tech companies. These tech companies are profit driven and often forgo safety and security for the sake of profit. They forget the importance of the consumer and the common good. Thankfully there are organizations that fight for consumers’ rights and work to ensure there will be accountability. On the other hand, these organizations can be just as damaging. Is this a lesser-of-two-evils situation or the evil-you-know situation? Oh well, the UK Safety Summit Institute is coming to America!
Whitney Grace, June 13, 2024
Detecting AI-Generated Research Increasingly Difficult for Scientific Journals
June 12, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Reputable scientific journals would like to publish only papers written by humans, but they are finding it harder and harder to enforce that standard. Researchers at the University of Chicago Medical Center examined the issue and summarize their results in “Detecting Machine-Written Content in Scientific Articles,” published at Medical Xpress. Their study was published in the Journal of Clinical Oncology Clinical Cancer Informatics on June 1. We presume it was written by humans.
The team used commercial AI detectors to evaluate over 15,000 oncology abstracts from 2021-2023. We learn:
“They found that there were approximately twice as many abstracts characterized as containing AI content in 2023 as compared to 2021 and 2022—indicating a clear signal that researchers are utilizing AI tools in scientific writing. Interestingly, the content detectors were much better at distinguishing text generated by older versions of AI chatbots from human-written text, but were less accurate in identifying text from the newer, more accurate AI models or mixtures of human-written and AI-generated text.”
Yes, that tracks. We wonder if it is even harder to detect AI-generated research that is, hypothetically, run through two or three different smart rewrite systems. Oh, who would do that? Maybe the former president of Stanford University?
The researchers predict:
“As the use of AI in scientific writing will likely increase with the development of more effective AI language models in the coming years, Howard and colleagues warn that it is important that safeguards are instituted to ensure only factually accurate information is included in scientific work, given the propensity of AI models to write plausible but incorrect statements. They also concluded that although AI content detectors will never reach perfect accuracy, they could be used as a screening tool to indicate that the presented content requires additional scrutiny from reviewers, but should not be used as the sole means to assess AI content in scientific writing.”
That makes sense, we suppose. But humans are not perfect at spotting AI text, either, though there are ways to train oneself. Perhaps if journals combine savvy humans with detection software, they can catch most AI submissions. At least until the next generation of ChatGPT comes out.
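For journals weighing that combination, here is a minimal sketch (my own, not anything from the study) of what a screening pass might look like. The score_abstract function is a placeholder for whatever commercial detector an editorial office licenses, and the 0.8 threshold is an arbitrary assumption; the point is that a high score flags a submission for extra human scrutiny rather than deciding anything on its own.

```python
# A hypothetical screening pass over submitted abstracts.
# score_abstract() stands in for a commercial AI-content detector;
# nothing here reproduces any real vendor's API.

from dataclasses import dataclass

@dataclass
class Abstract:
    submission_id: str
    text: str

def score_abstract(text: str) -> float:
    """Return the detector's estimated probability that the text is AI-generated."""
    raise NotImplementedError("Plug in the licensed detector's client call here.")

def flag_for_review(abstracts: list[Abstract], threshold: float = 0.8) -> list[str]:
    """Collect submission IDs scoring at or above the threshold for human review."""
    flagged = []
    for item in abstracts:
        if score_abstract(item.text) >= threshold:
            flagged.append(item.submission_id)
    # The flag triggers extra reviewer scrutiny; it is not a verdict on its own.
    return flagged
```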
Cynthia Murrell, June 12, 2024
What Is McKinsey & Co. Telling Its Clients about AI?
June 12, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Years ago (decades now) I attended a meeting at the firm’s technology headquarters in Bethesda, Maryland. Our carpetland welcomed the sleek, well-fed, and super entitled Booz, Allen & Hamilton professionals to a low-profile meeting to discuss the McKinsey PR problem. I attended because my boss (the head of the technology management group) assumed I would be invisible to the Big Dog BAH winners. He was correct. I was an off-the-New-York radar “manager,” buried in an obscure line item. So there I was. And what was the subject of this periodic meeting? The Harvard Business Review-McKinsey Award. The NY Booz, Allen consultants failed to come up with this idea. McKinsey did. As a result, the technology management group (soon to overtake the lesser MBA side of the business) had to rehash the humiliation of not getting associated with the once-prestigious Harvard University. (The ethics thing, the medical research issue, and the protest response have tarnished the silver Best in Show trophy. Remember?)
One of the most capable pilots found himself answering questions from a door-to-door salesman covering his territory somewhere west of Terre Haute. The pilot, who has survived but sits amidst a burning experimental aircraft, ponders an important question: “How can I explain that the crash was not my fault?” Thanks, MSFT Copilot. Have you ever found yourself in a similar situation? Can you “recall” one?
Now McKinsey has AI data. Actual hands-on, unbillable work product with smart software. Is the story in the Harvard Business Review? A Netflix documentary? A million-view TikTok hit? A “60 Minutes” segment? No, nyet, unh-unh, negative. The story appears in Joe Mansueto’s Fast Company Magazine! Mr. Mansueto founded Morningstar and has expanded his business interests to online publications and giving away some of his billions.
The write up is different from McKinsey’s stentorian pontifications. It is a bit like mining coal in a hard-rock dig deep underground: a dirty, hard, and ultimately semi-interesting job. Smart software almost broke the McKinsey marvels.
“We Spent Nearly a Year Building a Generative AI Tool. These Are the 5 (Hard) Lessons We Learned” presents information which would have been marketing gold for McKinsey decades ago. But this is 2024, more than 18 months after Microsoft’s OpenAI bomb blast at Davos.
What did McKinsey “learn”?
McKinsey wanted to use AI to “bring together the company’s vast but separate knowledge sources.” Of course, McKinsey’s knowledge is “vast.” How could it be tiny? The firm’s expertise in pharmaceutical efficiency methods exceeds that of many other consulting firms. What’s more important, profits or deaths? Answer: I vote for profits; doesn’t everyone except for a few complainers in Eastern Kentucky, West Virginia, and other flyover states?
The big reveal in the write up is that McKinsey & Co learned that its “vast” knowledge is fragmented and locked in Microsoft PowerPoint slides. After the non-billable overhead work, the bright young future corporate leaders discovered that smart software could only figure out about 15 percent of the knowledge payload in a PowerPoint document. With the vast knowledge in PowerPoint, McKinsey learned that smart software was a semi-helpful utility. The smart software was not able to “readily access McKinsey’s knowledge, generate insights, and thus help clients” or newly-hired consultants do better work, faster, and more economically. Nope.
So what did McKinsey’s band of bright smart software wizards do? The firm coded up its own content parser. How did that home brew software work? The grade is a solid B. The cobbled together system was able to make sense of 85 percent of a PowerPoint document. The other 15 percent gives the new hires something to do until a senior partner intervenes and says, “Get billable or get gone, you very special buttercup.” Non-billable and a future at McKinsey are not like peanut butter and jelly.
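McKinsey has not published its parser, so consider this only a bare-bones sketch of the general chore using the open source python-pptx library: walk each slide, pull out the text frames and speaker notes, and ignore everything else. The charts, images, and diagrams it skips are presumably where much of the knowledge payload that trips up the off-the-shelf tools actually lives.

```python
# A minimal sketch of extracting text from .pptx decks with the python-pptx library.
# This is not McKinsey's home-brew parser; it grabs only text frames and speaker
# notes, which is the "easy" slice of a slide deck's knowledge payload.

from pptx import Presentation

def extract_deck_text(path: str) -> list[str]:
    """Return the text chunks exposed by each slide's shapes and notes."""
    chunks = []
    prs = Presentation(path)
    for slide in prs.slides:
        for shape in slide.shapes:
            if shape.has_text_frame:
                text = shape.text_frame.text.strip()
                if text:
                    chunks.append(text)
        if slide.has_notes_slide:
            notes = slide.notes_slide.notes_text_frame.text.strip()
            if notes:
                chunks.append(notes)
    return chunks

if __name__ == "__main__":
    for chunk in extract_deck_text("example_deck.pptx"):  # hypothetical file name
        print(chunk)
```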
How did McKinsey characterize its 12-month journey into the reality of consulting baloney? The answer is a great one. Here it is:
With so many challenges and the need to work in a fundamentally new way, we described ourselves as riding the “struggle bus.”
Did the McKinsey workers break out into work songs to make the drudgery of deciphering PowerPoints go more pleasantly? I am thinking about the Coal Miners Boogie by George Davis, West Virginia Mine Disaster by Jean Ritchie, or my personal favorite Black Dust Fever by the Wildwood Valley Boys.
But the workers bringing brain to reality learned five lessons. One can, I assume, pay McKinsey to apply these lessons to a client firm experiencing a mental high from thinking about the payoffs from AI. On the other hand, consider these in this free blog post with my humble interpretation:
- Define a shared aspiration. My version: Figure out what you want to do. Get a plan. Regroup if the objective and the method don’t work or make much sense.
- Assemble a multi-disciplinary team. My version: Don’t load up on MBAs. Get individuals who can code, analyze content, and tap existing tools to accomplish specific tasks. Include an old geezer partner who can “explain” what McKinsey means when it suggests “managerial evolution.” Skip the ape to MBA cartoons.
- Put the user first. My version: Some lesser soul will have to use the system. Make sure the system is usable and actually works. Skip the minimum viable product and get to the quality of the output and the time required to use the system or just doing the work the old-fashioned way.
- Tech, learn, repeat. My version: Convert the random walk into a logical and efficient workflow. Running around with one’s hair on fire is not a methodical process nor a good way to produce value.
- Measure and manage. My version: Fire those who failed. Come up with some verbal razzle-dazzle and sell the planning and managing work to a client. Do not do this work on overhead for the consultants who are billable.
What does the great reveal by McKinsey tell me? First, the baloney about “saving an average of up to 30 percent of a consultant’s time by streamlining information gathering and synthesis” sounds like the same old, same old pitched by enterprise search vendors for decades. The reality is that online access to information does not save time; it creates more work, particularly when data voids are exposed. Those old dog partners are going to have to talk with young consultants. No smart software is going to eliminate that task no matter how many senior partners want a silver bullet to kill the beast of a group of beginners.
The second “win” is the idea that “insights are better.” Baloney. Flipping through the famous executive memos to a client, reading the reports with the unaesthetic dash points, and looking at the slide decks created by coal miners of knowledge years ago still has to be done… by a human who is sober, motivated, and hungry for peer recognition. Software is not going to have the same thirst for getting a pat on the head and in some cases on another part of the human frame.
The struggle bus is loading up now. Just hire McKinsey to be the driver, the tour guide, and the outfit that collects the fees. One can convert failure into billability. That’s what the Fast Company write up proves. Eleven months and all they got was a ride on the digital equivalent of the Cybertruck, which turned out to be a much-hyped struggle bus?
AI may ultimately rule the world. For now, it simply humbles the brilliant minds at McKinsey and generates a story for Fast Company. Well, that’s something, isn’t it? Now about spinning that story.
Stephen E Arnold, June 12, 2024
Will the Judge Notice? Will the Clients If Convicted?
June 12, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Law offices are eager to lighten their humans’ workload with generative AI. Perhaps too eager. Stanford University’s HAI reports, “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries.” Close enough for horseshoes, but for justice? And that statistic is with improved, law-specific software. We learn:
“In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported. And our previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his 2023 annual report on the judiciary, Chief Justice Roberts took note and warned lawyers of hallucinations.”
But that was before tailor-made retrieval-augmented generation tools. The article continues:
“Across all areas of industry, retrieval-augmented generation (RAG) is seen and promoted as the solution for reducing hallucinations in domain-specific contexts. Relying on RAG, leading legal research services have released AI-powered legal research products that they claim ‘avoid’ hallucinations and guarantee ‘hallucination-free’ legal citations. RAG systems promise to deliver more accurate and trustworthy legal information by integrating a language model with a database of legal documents. Yet providers have not provided hard evidence for such claims or even precisely defined ‘hallucination,’ making it difficult to assess their real-world reliability.”
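For readers who have not looked under the hood, here is a toy sketch of the RAG recipe these vendors are selling: retrieve the passages most relevant to the question, staple them to the prompt, and ask the model to answer only from what was retrieved. Everything below is a stand-in (naive keyword-overlap retrieval, a stubbed-out model call); it shows the shape of the idea, not anything LexisNexis or Thomson Reuters actually ships.

```python
# A toy retrieval-augmented generation loop. The corpus is a list of strings,
# retrieval is crude keyword overlap, and generate() is a stub for whatever
# language model a real product would call.

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Tell the model to answer only from the retrieved passages and to cite them."""
    context = "\n\n".join(passages)
    return (
        "Answer the question using only the sources below. "
        "Quote the source you rely on; if the sources do not answer it, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

def generate(prompt: str) -> str:
    """Stub for a call to a language model API."""
    raise NotImplementedError("Plug in a model call here.")

def answer(query: str, corpus: list[str]) -> str:
    return generate(build_prompt(query, retrieve(query, corpus)))
```

The hallucination problem the Stanford team measured lives in that last step: even when the right passages are retrieved, the model can still answer from thin air or cite a passage that does not support its claim.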
So the Stanford team tested three RAG systems for themselves: Lexis+ AI from LexisNexis, plus Westlaw AI-Assisted Research and Ask Practical Law AI from Thomson Reuters. The authors note they are not singling out LexisNexis or Thomson Reuters for opprobrium. On the contrary, these tools are less opaque than their competition and so more easily examined. The team found that these systems are more accurate than general-purpose models like GPT-4. However, the authors write:
“But even these bespoke legal AI tools still hallucinate an alarming amount of the time: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.”
These hallucinations come in two flavors. Many responses are flat out wrong. Others are misgrounded: they are correct about the law but cite irrelevant sources. The authors stress this second type of error is more dangerous than it may seem, for it may lure users into a false sense of security about the tool’s accuracy.
The post examines challenges particular to RAG-based legal AI systems and discusses responsible, transparent ways to use them, if one must. In short, it recommends public benchmarking and rigorous evaluations. Will law firms listen?
Cynthia Murrell, June 12, 2024
Will AI Kill Us All? No, But the Hype Can Be Damaging to Mental Health
June 11, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I missed the talk about how AI will kill us all. Planned? Nah, heavy traffic. From what I heard, none of the cyber investigators believed the person trying hard to frighten them. There are other, slightly more tangible, threats. One of the attendees whose name I did not bother to remember asked me, “What do you think about artificial intelligence?” My answer was, “Meh.”
A contrarian walks alone. Why? It is hard to make money being negative. At the conference I attended June 4, 5, and 6, attendees with whom I spoke just did not care. Thanks, MSFT Copilot. Good enough.
Why, you may ask? My method of handling the question is to refer to articles like this: “AI Appears to Rapidly Be Approaching a Brick Wall Where It Can’t Get Smarter.” This write up offers an opinion not popular among the AI cheerleaders:
Researchers are ringing the alarm bells, warning that companies like OpenAI and Google are rapidly running out of human-written training data for their AI models. And without new training data, it’s likely the models won’t be able to get any smarter, a point of reckoning for the burgeoning AI industry
Like the argument that AI will change everything, this claim applies to systems based upon indexing human content. I am reasonably certain that more advanced smart software built on different concepts will emerge. I am not holding my breath because much of the current AI hoo-hah has been gestating longer than a newborn baby elephant.
So what’s with the doom pitch? Law enforcement apparently does not buy the idea. My team doesn’t. For the foreseeable future, applied smart software operating within some boundaries will allow some tasks to be completed quickly and with acceptable reliability. Robocop is not likely for a while.
One interesting question is why the polarization exists. First, it is easy. And, second, one can cash in. If one is a cheerleader, one can invest in a promising AI start-up and make (in theory) oodles of money. By being a contrarian, one can tap into the segment of people who think the sky is falling. Being a contrarian is “different.” Plus, by predicting implosion and the end of life, one can get attention. That’s okay. I try to avoid being the eccentric carrying a sign.
The current AI bubble relies in a significant way on a Google recipe: indexing text. The approach reflects Google’s baked-in biases. It indexes the Web; therefore, it should be able to answer questions by plucking factoids. Sorry, that doesn’t work. Glue cheese to pizza? Sure.
Hopefully new lines of investigation will reveal different approaches. I am skeptical about synthetic data (made-up data that is probably correct). My fear is that we will require another 10, 20, or 30 years of research to move beyond shuffling content blocks around. There has to be a higher level of abstraction operating. But machines are machines, and wetware (human brains) is different.
Will life end? Probably but not because of AI unless someone turns over nuclear launches to “smart” software. In that case, the crazy eccentric could be on the beam.
Stephen E Arnold, June 11, 2024
AI and Ethical Concerns: Sure, When “Ethics” Means Money
June 11, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
It seems workers continue to flee OpenAI over ethical concerns. The Byte reports, “Another OpenAI Researcher Quits, Issuing Cryptic Warning.” Understandably unwilling to disclose details, policy researcher Gretchen Krueger announced her resignation on X. She did express a few of her concerns in broad strokes:
“We need to do more to improve foundational things, like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.”
Krueger emphasized that these important issues not only affect communities now but also influence who controls the direction of pervasive AI systems in the future. Right now, that control is in the hands of the tech bros running AI firms. Writer Maggie Harrison Dupré notes Krueger’s departure comes as OpenAI is dealing with a couple of scandals. Other high-profile resignations have also occurred in recent months. We are reminded:
“[Recent] departures include that of Ilya Sutskever, who served as OpenAI’s chief scientist, and Jan Leike, a top researcher on the company’s now-dismantled ’Superalignment’ safety team — which, in short, was the division effectively in charge of ensuring that a still-theoretical human-level AI wouldn’t go rogue and kill us all. Or something like that. Sutskever was also a leader within the Superalignment division. And to that end, it feels very notable that all three of these now-ex-OpenAI workers were those who worked on safety and policy initiatives. It’s almost as if, for some reason, they felt as though they were unable to successfully do their job in ensuring the safety and security of OpenAI’s products — part of which, of course, would reasonably include creating pathways for holding leadership accountable for their choices.”
Yes, most of us would find that reasonable. For members of that leadership, though, it seems escaping accountability is a top priority.
Cynthia Murrell, June 11, 2024
AI May Not Be Magic: The Salesforce Signal
June 10, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Salesforce has been a steady outfit. However, the company suffered a revenue miss, its first in about a quarter century. The news reports cited broad economic factors like “macro headwinds.” According to the firm’s chief people officer, the company has been experimenting with AI for “over a decade.” But the magic of AI was not able to ameliorate the company’s dip or add some chrome trim to its revenue guidance.
John Milton’s god character from Paradise Lost watches the antics of super-sophisticated artificial intelligence algorithms. This character quickly realizes that zeros and ones crafted by humans and enhanced by smart machines are definitely not an omniscient, omnipresent, and omnipotent character who knows everything before it happens, no matter what the PR firms or company spokesperson asserts. Thanks, MSFT Copilot. Good enough.
Piecing together fragments of information, it appears that AI has added to the company’s administrative friction. In a Fortune interview, recycled for MSN.com, consider these administrative process examples:
- The company has deployed 50 AI tools.
- Salesforce has an AI governance council.
- There is an Office of Ethical and Humane Use, started in 2019.
- Salesforce uses surveys to supplement its “robust listening strategies.”
- There are phone calls and meetings.
Some specific uses of AI appear to address inherent design constraints in Salesforce software; for example, AI has:
saved employees 50,000 hours within one business quarter, and the bot answered nearly 370,000 employee questions, according to the company. Merging into Project Basecamp, the company’s project management platform, has resolved 88,000 worker requests, speeding up issue resolution from an average of 48 hours to just 30 minutes.
What’s the payoff to the bottom line? That information is scant. What we know is that Salesforce may not be benefiting from the additional AI investment, and the friction AI’s bureaucratic processes impose on the company does not help.
What’s this mean for those who predict that AI will change everything? I continue to think about the two ends of the spectrum: the go-fast-and-break-things crowd and the stop-AI contingent.
First, the type of AI that does high school essay writing is easy to understand. These systems work as long as the subject matter clumps into piles of factoids which limit the craziness of the algorithms’ outputs. The topic “How to make a taco” is nailed down. The topic “How to decrypt Telegram’s encryption system” is not. Big brains can explain why the taco question is relatively hallucination free but not why the Telegram question generates useless drivel. I have, therefore, concluded, “Limited, narrow domain questions are okay for AI.”
Second, the current systems are presented as super wonderful. An example is the steady flow of PR about Google DeepMind’s contributions to biological science. Yet Google’s search system generates baloney. I think the difference is that whacking away at proteins is a repetitive combinatorial problem. Calling the methods AI is like calling Daylight Chemical Information Systems a manifestation of the Oracle at Delphi: hogwash. PR erases important differences in critical lines of research. Does Google DeepMind feel shame? Let’s ask IBM Watson. That will be helpful. PR has a role; it is not AI.
Third, the desire for a silver bullet is deep-seated in many Peter Principle managers. These “leaders” of “leadership teams” don’t know what to do. Managing becomes figuring out risks. AI has legs, so let’s give that pony a chance to win the cart race. But pony cart races are trivial. The real races require winning three competitions. Few horses pull off that trick. I watch in wonder the launch, retreat, PR explanation, and next launch of some AI outfits. The focus seems to be on getting $20 per month. Degrading the service. Asking for more money. Then repeat.
The lack of AI innovation is becoming obvious. From the starter’s gun cracking in time with Microsoft’s AI announcement in January 2023, how much progress has been made?
We have the Salesforce financial report. We have the management craziness at OpenAI. We have Microsoft investing in or partnering with a number of technology outfits, including one in Paris. We have Google just doddering and fumbling. We have lawsuits. We have craziness like Adobe’s “owning” any image created with its software. We have start-ups which bandy about the term “AI” like a shuttlecock in a high school badminton league in India. We have so many LinkedIn AI experts, I marvel that no one pins these baloney artists to a piece of white bread. We have the Dutch police emphasizing home-grown AI which helped make sense of the ANOM phone stings when the procedures are part of most policeware systems. Statistics, yes. AI, no. Clustering, yes. AI, no. Metadata assignment, yes. AI, no. The ANOM operation ran from about 2017 to its shutdown four years later. AI? Nope.
What does the lack of financial payoff and revenue generating AI solutions tell me? My answer to this question is:
- The costs of just using an AI system and of letting prospects use it are high. Due to the lack of a Triple Crown contender, no company has the horse or can afford the costs of getting the nag ready to race and keeping the animal from keeling over dead.
- The tangible results are tough to express. Despite the talk about reducing the costs of customer service, the net savings are not evident to me once one factors in the cost of the AI system and the need to have humans ride herd on what the crazed cattle-like algorithms yield. The Salesforce experience is that AI cannot fix the Slack system or make it generate oodles of cost savings or revenues from new, happy customers.
- The AI systems, particularly the services promoted via Product Hunt, are impossible for me to differentiate. Some do images, but the functions are similar. Some AI systems do text things. Okay. But what’s new? Money is being spent to produce endless variations and me-too services. Fun for some. But boring and a waste of time to a dinobaby like me.
Net net: With economic problems growing in numerous sectors, those with money or a belief that garlic will kill Count Vampire, Baron of Revenue Loss are in for a surprise. Sorry. No software equivalent to Milton’s eternal, all-knowing, omnipotent God. I won’t tell the PR people. That Salesforce signal is meaningful.
Stephen E Arnold, June 10, 2024
Publishers Sign Up for the Great Unknown: Risky, Oh, Yeah
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
OpenAI is paying for content. Why? Maybe to avoid lawsuits? Maybe to get access to “real” news to try to get ahead of its perceived rivals? Maybe because Sam AI-Man pushes forward while its perceived competitors do weird things like add features, launch services which are lousy, or which have the taste of the bitter fruit of Zuckus nepenthes.
Publishers are like beavers. Publishers have to do whatever they can to generate cash. Thanks, MSFT Copilot. Good enough. Not a cartoon and not a single dam, but just like MSFT security good enough, today’s benchmark of excellence.
“Journalists Deeply Troubled by OpenAI’s Content Deals with Vox, The Atlantic” is a good example of the angst Sam AI-Man is causing among “real” news outfits and their Fourth Estate professionals. The write up reports:
“Alarmed” writers unions question transparency of AI training deals with ChatGPT maker.
Oh, oh. An echo of Google’s Code Red am I hearing? No, what I hear is the ka-ching of the bank teller’s deposit system as the “owner” of the Fourth Estate professional business process gets Sam AI-Man’s money. Let’s not confuse “real” news with “real” money, shall we? In the current economic climate, money matters. Today it is difficult to sell advertising unless one is a slam dunk monopoly with an ad sales system that is tough to beat. Today it is tough to get those who consume news via a podcast or a public Web site to subscribe. I think that the number I heard for conversions is something like one or two subscribers per 100 visitors on a really good day. Most days are not really good.
“Real” journalists can be unionized. The idea is that their services have to be protected from the lawyers and bean counters who run many high-profile publishing outfits. The problem with unions is that these seek to limit what the proprietors can do in a largely unregulated capitalist set up like the one operating within the United States. In a long-forgotten pre-digital era, those in a union had a dust-up in 1921 at Blair Mountain in my favorite state, West Virginia. Today, the union members are more likely to launch social media posts and hook up with a needy lawyering outfit.
Let me be clear. Some of the “real” journalists will find fame as YouTubers, pundits on what’s left of traditional TV or cable news programs, or by writing a book which catches the attention of Netflix. Most, however, will do gig work and migrate to employment adjacent to “real” news. The problem is that in any set of “real” journalists, the top 10 percent will be advantaged. The others may head to favelas, their parent’s basement, or a Sheetz parking lot in my favorite state for some chemical relief. Does that sound scary?
Think about this.
According to the Observer’s story “Sam Altman Says OpenAI Doesn’t Fully Understand How GPT Works Despite Rapid Progress,” Sam AI-Man admits his company does not fully grasp how its own technology works. These money-focused publishers are signing up for something that they do not understand and that the fellow surfing the crazy wave of smart software does not fully understand either. But taking money and worrying about the future is not something publishing executives in their carpetlands think about. Money in hand is good. Worrying about the future, according to their life coach, is not worth the mental stress. It is go-go in a now-now moment.
I cannot foretell the future. If I could, I would not be an 80-year-old dinobaby sitting in my home office marveling at the downstream consequences of what amounts to a 2024 variant of the DR-LINK technology. I can offer a handful of hypotheses:
- “Real” journalists are going to find that publishers cut deals to get cash without thinking of the “real” journalists or the risks inherent in hopping in a small cabin with Sam AI-Man for a voyage in the unknown.
- Money and cost reductions will fuel selling information to Sam AI-Man and any other Big Tech outfit which comes calling with a check book. Money now is better than looking at a graph of advertising sales over the last five years. Money trumps “real” journalists’ complaints when they are offered part-time work or an opportunity to find their future elsewhere.
- Publishing outfits have never been technology adept, and I think that engineered blindness is now built into the companies’ management processes. Change is going to make publishing an interesting business. That’s good for consultants and bankruptcy specialists. It will not be so good for those who do not have golden parachutes or platinum flying cars.
Net net: What are the options for the “real” journalists’ unions? Lawyers, maybe. Social media posts. Absolutely. Will these prevent publishers from doing what publishers have to do? Nope.
Stephen E Arnold, June 7, 2024
Meta Deletes Workplace. Why? AI!
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Workplace was Meta’s attempt to jump into the office-productivity ring and face off against the likes of Slack and MS Teams. It did not fare well. Yahoo Finance shares the brief write-up, “Meta Is Shuttering Workplace, Its Enterprise Version of Facebook.” The company is spinning the decision as a shift to bigger and better things. Bloomberg’s Kurt Wagner cites reporting from TechCrunch as he writes:
“The service operated much like the original Facebook social network, but let people have separate accounts for their work interactions. Workplace had as many as 7 million total paying subscribers in May 2021. … Meta once had ambitious plans for Workplace, and viewed it as a way to make money through subscriptions as well as a chance to extend Facebook’s reach by infusing the product into work and office settings. At one point, Meta touted a list of high-profile customers, including Starbucks Corp., Walmart Inc. and Spotify Technology SA. The company will continue to focus on workplace-related products, a spokesperson said, but in other areas, such as the metaverse by building features for the company’s Quest VR headsets.”
The Meta spokesperson repeated the emphasis on those future products, also stating:
“We are discontinuing Workplace from Meta so we can focus on building AI and metaverse technologies that we believe will fundamentally reshape the way we work.”
Meta will continue to use Workplace internally, but everyone else has until the end of August 2025 before the service ends. Meta plans to keep user data accessible until the end of May 2026. The company also pledges to help users shift to Zoom’s Workvivo platform. What, no forced migration into the Metaverse and their proprietary headsets? Not yet, anyway.
Cynthia Murrell, June 7, 2024
AI in the Newsroom
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
It seems much of the news we encounter is already, at least in part, generated by AI. Poynter discusses how “AI Is Already Reshaping Newsrooms, AP Study Finds.” The study surveyed 292 respondents from legacy media, public broadcasters, magazines, and other news outlets. Writer Alex Mahadevan summarizes:
“Nearly 70% of newsroom staffers from a variety of backgrounds and organizations surveyed in December say they’re using the technology for crafting social media posts, newsletters and headlines; translation and transcribing interviews; and story drafts, among other uses. One-fifth said they’d used generative AI for multimedia, including social graphics and videos.”
Surely these professionals are only using these tools under meticulous guidelines, right? Well, a few are. We learn:
“The tension between ethics and innovation drove Poynter’s creation of an AI ethics starter kit for newsrooms last month. The AP — which released its own guidelines last August — found less than half of respondents have guidelines in their newsrooms, while about 60% were aware of some guidelines about the use of generative AI.”
The survey found the idea of guidelines was not even on most respondents’ minds. That is unsettling. Mahadevan lists some other interesting results:
“*54% said they’d ‘maybe’ let AI companies train their models using their content.
*49% said their workflows have already changed because of generative AI.
*56% said the AI generation of entire pieces of content should be banned.
*Only 7% of those who responded were worried about AI displacing jobs.
*18% said lack of training was a big challenge for ethical use of AI. ‘Training is lovely, but time spent on training is time not spent on journalism — and a small organization can’t afford to do that,’ said one respondent.”
That last statement is disturbing, given the gradual deterioration and impoverishment of large news outlets. How can we ensure best practices make their way into this mix, and can it be done before any given news item might be fake news?
Cynthia Murrell, June 7, 2024