Dexa: A New Podcast Search Engine
May 21, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Google, Bing, and DuckDuckGo (with its small share of the market) dominate US search. Spotify, Apple Podcasts, and other platforms host and aggregate podcast shows. The problem is that never the twain shall meet when people search for video or audio content. Riley Tomasek was inspired by the problem and developed the Dexa app:
“Dexa is an innovative brand that brings the power of AI to your favorite podcasts. With Dexa’s AI-powered podcast assistants, you can now explore, search, and ask questions related to the knowledge shared by trusted creators. Whether you’re curious about sleep supplements, programming languages, growing an audience, or achieving financial freedom, Dexa has you covered. Dexa unlocks the wisdom of experts like Andrew Huberman, Lex Fridman, Rhonda Patrick, Shane Parrish, and many more.
With Dexa, you can explore the world of podcasts and tap into the knowledge of trusted creators in a whole new way.”
Andrew Huberman of Huberman Lab picked up the app and helped it go viral.
From there the Dexa team built an intuitive, complex AI-powered search engine that indexes, analyzes, and transcribes podcasts. Since Dexa launched nine months ago, it has attracted 50,000 users, answered almost one million questions, and partnered with famous podcasters. A recent update included a chat-based interface, more search and discovery options, and the ability to watch referenced clips in a conversation.
Dexa has raised $6 million in seed money and secured an exclusive partnership with Huberman Lab.
Dexa is still a work in progress. It responds like ChatGPT but with a focus on conveying information and searching for content. It is an intuitive platform that cites its sources directly in the search results. It is probably an interface that will be adopted by other search engines in the future.
Whitney Grace, May 21, 2024
AI and Work: Just the Ticket for Monday Morning
May 20, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Well, here’s a cheerful essay for the average worker in a knowledge industry. “If Your Work’s Average, You’re Screwed. It’s Over for You” is the ideal essay to kick off a new work week. The essay appears in Digital Camera World. I thought traditional and digital cameras were yesterday’s news. Therefore, I surmise the author of the write up misses the good old days of Kodak film, chemicals, and really expensive retouching.
How many US government professionals will find themselves victims of good enough AI? Answer: More than the professional photographers? Thanks, MSFT Copilot. Good enough, a standard your security systems seem to struggle to achieve.
What does the camera-focused (yeah, lame pun) essay report? Consider this passage:
there’s one thing that only humans can do…
Okay, one thing. I give up. What’s that? Create other humans? Write poetry? Take fentanyl and lose the ability to stand up for hours? Captain a boat near orcas who will do what they can to sink the vessel? Oh, well. What’s that one thing?
"But I think the thing that AI is going to have an impossible job of achieving is that last 1% that stands between everything [else] and what’s great. I think that that last 1%, only a human can impart that.
AI does the mediocre. Humans, I think, do the exceptional. The logic seems to be that only someone in the top tier of humans will have a job. Everyone else will be standing in line to get basic income checks, pursuing crime, or reading books. Strike that. Scrolling social media. No doom required. Those not in the elite will know doom first hand.
Here’s another passage to bring some zip to a Monday morning:
What it’s [smart software] going to do is, if your work’s average, you’re screwed. It’s [having a job] over for you. Be great, because AI is going to have a really hard time being great itself.
Observations? Just that cost cutting may be Job One.
Stephen E Arnold, May 20, 2024
Hoot Hoot Hoot: A Xoogler Pushes the Help Button
May 20, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The Daily Express US (?) published a remarkable story: “Former Google VP Issues Horror AI Warning As Technology Set to Leave Millions Jobless.” That’s a catchy assertion. Who is the Xoogler (that’s a former Googler for those who don’t know) mashing the Red Alert siren? It is Geoffrey Hinton, who is a Big Wheel in the Land of AI.
A teacher with an out-of-control class needs help. Unfortunately, pressing the big red button is performative. It is too late to get the class under control. Does AI behave like these kids? Thanks, MSFT Copilot. Good enough.
He believes that some entity has to provide a universal basic income to those people who are unable to find work because AI ate their jobs. The acronym UBI in the vernacular of a dinobaby means welfare. But those younger than I will interpret the UBI idea as something that “they” must provide.
The write up quotes the computer and AI wizard as opining:
"If you pay everybody a universal basic income, that solves the problem of them starving and not being able to pay the rent but that doesn’t solve the self-respect problem."
I like the reference to self-respect. I have not encountered too many examples in the last day or so. I have choked off the flood of “information” about the assorted trials of a former elected official, the hooligan trashing of Macy’s stores, and the arrest and un-arrest of a certain celebrity golfer. That’s enough of the self-respect thing for me.
The write up continues:
He added: "I am very worried about AI taking over lots of mundane jobs. That should be a good thing. It’s going to lead to a big increase in productivity, which leads to a big increase in wealth, and if that wealth was equally distributed that would be great, but it’s not going to be. "In the systems we live in, that wealth is going to go to the rich and not to the people whose jobs get lost, and that’s going to be very bad for society, I believe. It’s going to increase the gap between rich and poor, which increases the chances of right-wing populists getting elected."
Okay, that’s an interesting moment of insight from one of the people who assisted in the creation of this sprint to societal change.
I find it interesting that technology marches forward in a way that prevents smart people from peering down the road from a vantage point defined by their computer monitor and lab partners. The bird’s-eye view of a technology like AI is of interest only when the individual steps away from a Google-type outfit.
AI can hallucinate. I think it is clear that the wizards “inventing” smart software also hallucinate within their digital constructs.
What happens when the hallucinogenic wears off? For Dr. Hinton it is time to call for help. I assume the UBI help will arrive from “the government.” Will “the government” listen, get organized, and take action? Dr. Hinton, like some smart software, might be experiencing what some of his AI colleagues call hallucinating. Am I surprised? Nope. Wizards are quirky.
Stephen E Arnold, May 20, 2024
Germany Has Had It with Some Microsoft Products
May 20, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Can Schleswig-Holstein succeed where Munich and Lower Saxony failed? Those two German states tried switching their official IT systems from Microsoft to open source software but were forced to reverse course. Emboldened by Microsoft’s shove to adopt Windows 11 and Office 365, informed by its neighbors’ defeats, and armed with three years of planning, Germany’s northernmost state is forging ahead. The Register frames the initiative as an epic battle in, “Open Source Versus Microsoft: The New Rebellion Begins.”
With cries of “Digital Sovereignty,” Schleswig-Holstein shakes its fist at its corporate overlord. Beginning with the aptly named LibreOffice suite, these IT warriors plan to replace Microsoft products top to bottom with open source alternatives. Writer Rupert Goodwins notes open source software has improved since Munich and Lower Saxony were forced to retreat, but will that be enough? He considers:
“Microsoft has a lot of cards to play here. Schleswig-Holstein will have to maintain compatibility with Windows within its own borders, with the German federation, with Europe, and the rest of the world. If a change to Windows happens to break that compatibility, guess who picks up the pain and the bills. Microsoft wouldn’t dream of doing that deliberately, no matter how high the stakes, yet these things happen. Freedom to innovate, don’t you know. If in five years the transition is a success, the benefits to the state, the people, and open source will be immeasurable. As well as bringing data protection back to those charged with providing it, it will give European laws new teeth. It will increase expertise, funding, and opportunities for open source. Schleswig-Holstein itself will become a new hub of technical excellence in an area that intensely interests the rest of the world, in public and private organizations. Microsoft cannot afford to let this happen. Schleswig-Holstein cannot back down, now it’s made it a battle for independence.”
See the write-up for more warfare language as well as Goodwins’ likening of user agreements to the classic suzerain-vassal relationship. Will Schleswig-Holstein emerge victorious, or will mighty Microsoft prevail? Governments depend on Microsoft. The US is now putting pressure on the Softies to do something more than making Windows 11 more annoying and creating a Six Flags Over Cyber Crime with their security methods. Will anything change? Nah.
Cynthia Murrell, May 22, 2024
Googzilla Versus OpenAI: Moving Up to Pillow Fighting
May 17, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Mike Tyson is dressed in a Godzilla outfit. He looks like a short but quite capable Googzilla. He is wearing a Google hat. (I have one, but it is soiled. Bummer.) Googzilla is giving the stink eye to Sam AI-Man, who has followed health routines recommended by Huberman Lab and Anatoly, the fellow who hawks supplements after shaming gym brutes while dressed as a normcore hero.
Sam AI-Man asks an important question. Googzilla seems to be baffled. But the cane underscores that he is getting old for a thunder lizard selling online advertising. Thanks, MSFT Copilot. How are the security initiatives coming along? Oh, too bad.
Now we have the first exhibition: Googzilla is taking on Sam AI-Man.
I read an analysis of this high-stakes battle in “ChatGPT 4o vs Gemini 1.5 Pro: It’s Not Even Close.” The article appeared in the delightfully named online publication “Beebom.” I am writing in Beyond Search, which is — quite frankly — a really boring name. But I am a dinobaby, and I am going to assume that Beebom has a much more tuned-in owner-operator.
The article illustrates a best practice in database comparison, just tweaked to provide some insights into how alike or different Googzilla is from the AI-Man. There’s a math test. There is a follow-the-instructions query. There is an image test. A programming challenge. You get the idea. The article includes what a reader will need to run similar brain teasers past Googzilla and Sam AI-Man.
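For readers who want to poke the two beasties themselves, here is a minimal sketch of the side-by-side approach. It is my own illustration, not Beebom’s harness: it assumes the official OpenAI and Google Python SDKs, API keys exported as OPENAI_API_KEY and GOOGLE_API_KEY, and stand-in prompts echoing the article’s test types.

```python
# A toy side-by-side harness in the spirit of the Beebom comparison.
# Not the article's actual code. Requires: pip install openai google-generativeai
# and OPENAI_API_KEY / GOOGLE_API_KEY in the environment.
import os

import google.generativeai as genai
from openai import OpenAI

openai_client = OpenAI()  # picks up OPENAI_API_KEY automatically
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini = genai.GenerativeModel("gemini-1.5-pro")

PROMPTS = [
    # A reasoning teaser and a coding task, illustrative stand-ins only.
    "I have 3 apples today. Yesterday I ate 2 apples. How many apples do I have now?",
    "Write a Python function that checks whether a string is a palindrome.",
]

for prompt in PROMPTS:
    gpt_reply = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    gemini_reply = gemini.generate_content(prompt).text

    print(f"PROMPT: {prompt}")
    print(f"--- ChatGPT 4o ---\n{gpt_reply}")
    print(f"--- Gemini 1.5 Pro ---\n{gemini_reply}\n")
```

Run the same prompts through both models, eyeball the answers, and you have the Beebom method in miniature.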
Who cares? Let’s get to the results.
The write up says:
It’s evidently clear that Gemini 1.5 Pro is far behind ChatGPT 4o. Even after improving the 1.5 Pro model for months while in preview, it can’t compete with the latest GPT-4o model by OpenAI. From commonsense reasoning to multimodal and coding tests, ChatGPT 4o performs intelligently and follows instructions attentively. Not to miss, OpenAI has made ChatGPT 4o free for everyone.
Welp. This statement is not going to make Googzilla happy. Anyone who plays Foosball with the beastie today will want to be alert that re-Fooses are not allowed. You lose when you whack the ball out of the game.
But the sun has not set over the Googzilla computer lab. The write up opines:
The only thing going for Gemini 1.5 Pro is the massive context window with support for up to 1 million tokens. In addition, you can upload videos too which is an advantage. However, since the model is not very smart, I am not sure many would like to use it just for the larger context window.
I chuckled at the last line of the write up:
If Google has to compete with OpenAI, a substantial leap is required.
Several observations:
- Who knows the names of the “new” products Google rolled out?
- With numerous “new” products, has Google a grand vision, or is it one of those high school stunts in which passengers in a luxury car jump out, run around the car shouting, and then the car drives off?
- Will Google’s management align its AI with its staff management methods in the context of the regulatory scrutiny?
- Where’s DeepMind in this somewhat confusing flood of “new” smart products?
Net net: Google is definitely showing the results of having its wizards work under Code Red’s flashing lights. More pillow fights ahead. (Can you list the “new” products announced at Google I/O? Don’t worry. Neither can I.)
Stephen E Arnold, May 17, 2024
IBM: A Management Beacon Shines Brightly
May 17, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
To be frank, I don’t know if the write up called “IBM Sued Again for Alleged Discrimination. This Time Against White Males” is on the money. I don’t really care. The item is absolutely delicious. For context, older employees were given an opportunity to train their replacements and then find their future elsewhere. I think someone told me that was “age discrimination.” True or not, a couple of interesting Web sites disappeared. These reported on the hilarious personnel management policies in place at Big Blue during the sweep of those with silver hair. Hey, as a dinobaby, I know getting older adds a cost burden to outfits who really care about their employees. Plus, old employees are not “fast,” like those whip smart 24 year olds with fancy degrees and zero common sense. I understood the desire to dump expensive employees and find cheaper, more flexible workers. Anyone can learn PL/I, but only the young can embrace the intricacies of Squarespace.
Old geezers and dinobabies have no place on a team of young, bright, low-wage athletes. Thanks, ChatGPT. Good enough in one try. Microsoft Copilot crashed. Well, MSFT is busy with security and AI, or is it AI and security? I don’t know, do you?
The cited article reports:
The complaint claims that in the pursuit of greater racial and gender diversity within the Linux distro maker, Red Hat axed senior director Allan Kingsley Wood, an employee of eight years. According to the suit, that diversity, equity, and inclusion (DEI) initiative within Red Hat “necessitates prioritizing skin color and race as primary hiring factors,” and this, and not other factors, led to him being laid off. Basically, Wood claims he was unfairly let go for being a White man, rather than for performance or the like, because Red Hat was focused on prioritizing in an unlawfully discriminatory fashion people of other races and genders to diversify its ranks.
The impact? The professional has an opportunity to explore the greenness on the side of the fence closer to the unemployment benefits claims office. The write up concludes this way:
It’s too early to tell how likely Wood is to succeed in his case. A 2020 lawsuit against Google on similar grounds didn’t even make it to court because the plaintiff withdrew. On the other hand, IBM has been settling age-discrimination claims left and right, so perhaps we’ll see that happen here. We’ve reached out to Red Hat and AFL for further comment on the impending court battle, and we’ll update if we hear back.
I will predict the future. The parties to this legal matter (assuming it is not settled among gentlemen) will not get back to the author of the news report. In my opinion, IBM remains a paragon of outstanding personnel management.
Stephen E Arnold, May 17, 2024
Allegations about Medical Misinformation Haunt US Tech Giants
May 17, 2024
Access to legal and safe abortion, also known as the fight for reproductive rights, is a controversial issue in the United States and in countries with large Christian populations. Opponents of abortion often spread false information about the procedure. They are also known to spread misinformation about sex education, especially birth control. Mashable shares the unfortunate story that tech giants “Meta And Google Fuel Abortion Misinformation Across Africa, Asia, And Latin America, Report Finds.”
The Center for Countering Digital Hate (CCDH) and MSI Reproductive Choices (MSI) released a new report that found Meta and sometimes Google restricted abortion information and disseminated misinformation and abuse in Latin America, Asia, and Africa. Abortion providers are prevented from placing ads globally on Google and Meta. Meta also earns revenue from anti-abortion ads bought in the US and targeted at the aforementioned regions.
MSI claims in the report that Meta removed or rejected its ads in Vietnam, Nigeria, Nepal, Mexico, Kenya, and Ghana because of “sensitive content.” Meta also placed blanket advertising restrictions on MSI’s teams in Vietnam and Nepal without explanation. Google blocked ads with the keyword “pregnancy options” in Ghana, and MSI claimed it was banned from using that term in a Google AdWords campaign.
Google offered an explanation:
“Speaking to Mashable, Google representative Michael Aciman said, ‘This report does not include a single example of policy violating content on Google’s platform, nor any examples of inconsistent enforcement. Without evidence, it claims that some ads were blocked in Ghana for referencing ‘pregnancy options’. To be clear, these types of ads are not prohibited from running in Ghana – if the ads were restricted, it was likely due to our longstanding policies against targeting people based on sensitive health categories, which includes pregnancy.’”
Google and Meta have been vague and inconsistent about why they remove pregnancy option ads while allowing pro-life groups to spread unchecked misinformation about abortion. Meta, Google, and other social media companies mine user information, but they do little to protect civil liberties and human rights.
Organizations like MSI and CCDH are doing what they can to fight bad actors. It’s an uphill battle and it would be easier if social media companies helped.
Whitney Grace, May 17, 2024
Flawed AI Will Still Take Jobs
May 16, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Shocker. Organizations are using smart software which is [a] operating in a way its creators cannot explain, [b] making up information, and [c] apparently dominated by a handful of “above the law” outfits. Does this characterization seem unfair? If so, stop reading. If it seems anchored in reality, you may find my comments about jobs for GenX, GenY or GenWhy?, millennials, and Alphas (I think this is what marketers call wee lads and lasses) somewhat in line with the IMF’s view of AI.
The answer is, “Your daughter should be very, very intelligent and very, very good at an in-demand skill. If she is not, then it is doom scrolling for sure.” Thanks, MSFT Copilot. Do your part for the good of mankind today.
“Artificial Intelligence Hitting Labour Forces Like a Tsunami – IMF Chief” screws up the metaphor. A tsunami builds, travels, dissipates. I am not sure what the headline writer thinks will dissipate in AI land. Jobs for sure. But AI seems to have some sticking power.
What does the IMF say? Here’s a bit of insight:
Artificial intelligence is likely to impact 60% of jobs in advanced economies and 40% of jobs around the world in the next two years…
So what? The IMF Big Dog adds:
“It could bring tremendous increase in productivity if we manage it well, but it can also lead to more misinformation and, of course, more inequality in our society.”
Could. I think it will, but only for those who know their way around AI and sit in the tippy top of smart people. ATM users, TikTok consumers, and those who think school is stupid may not emerge as winners.
I find it interesting to consider what a two-tier society in the US and Western Europe will manifest. What will the people who do not have jobs do? Volunteer to work at the local animal shelter, pick up trash, or just kick back. Yeah, that’s fun.
What if one looks back over the last 50 years? When I grew up, my father had a job. My mother worked at home. I went to school. The textbooks were passed along year to year. The teachers grouped students by ability and segregated some students into an “advanced” track. My free time was spent outside “playing” or inside reading. When I was 15, I worked as a car hop. No mobile phones. No computer. Just radio, a record player, and a crappy black-and-white television which displayed fuzzy programs. The neighbors knew me and the other “kids.” From my eighth grade class, everyone went to college after high school. In my high school class of 1962, everyone was thinking about an advanced degree. Social was something a church sponsored. Its main feature was ice cream. After getting an advanced degree (in 1965, I believe), I got a job because someone heard me give a talk about indexing Latin sermons and said, “We need you.” Easy.
A half century later, what is the landscape? AI is eliminating jobs. Many of these will be intermediating jobs like doing email spam for a PR firm’s client or doing legal research. In the future, knowledge work will move up the Great Chain of Being. Most won’t be able to do the climbing to make it up to a rung with decent pay, some reasonable challenges, and a bit of power.
Let’s go back to the somewhat off-the-mark tsunami metaphor. AI is going to become more reliable. The improvements will continue. Think about what an IBM PC looked like in the 1980s. Now think about the MacBook Air you or your colleague has. They are similar but not equivalent. What happens when AI systems and methods keep improving? That’s tough to predict. What’s obvious is that the improvements and innovations in smart software are not a tsunami.
I liken it more to the continuous pressure in a petroleum cracking facility. Work is placed in contact with smart software, and stuff vaporizes. The first components to be consumed are human jobs. Next, the smart software will transform “work” itself. Most work is busy work; smart software wants “real” work. As long as the electricity stays on, the impact of AI will be ongoing. AI will transform. A tsunami crashes, makes a mess, and then is entropified. AI is a different and much hardier development.
The IMF is on the right track; it is just not making clear how much change is now underway.
Stephen E Arnold, May 16, 2024
AI Delivers The Best of Both Worlds: Deception and Inaccuracy
May 16, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Wizards from one of the institutions probed over its Jeffrey Epstein ties made headlines about AI deception. Well, if there is one institution familiar with deception, I would submit that the Massachusetts Institute of Technology might be considered for the ranking, maybe in the top five.
The write up is “AI Deception: A Survey of Examples, Risks, and Potential Solutions.” If you want summaries of the write up, you will find them in The Guardian (we beg for dollars British newspaper) and Science Alert. Before I offer my personal observations, I will summarize the “findings” briefly. Smart software can output responses designed to deceive users and other machine processes.
Two researchers at a big name university make an impassioned appeal for a grant. These young, earnest, and passionate wizards know their team can develop a lie detector for an artificial intelligence large language model. The two wizards have confidence in their ability, of course. Thanks, MSFT Copilot. Good enough, like some enterprise software’s security architecture.
If you follow the “next big thing” hoo hah, you know that the garden variety of smart software incorporates technology from outfits like Google. I have described Google as a “slippery fish” because it generates explanations which often don’t make sense to me. Using the large language model generative text systems can yield some surprises. These range from images which seem out of step with historical fact to legal citations that land a lazy lawyer (yes! alliteration) in a load of lard.
The MIT researcher has verified that smart software may emulate the outstanding ethical qualities of an engineer or computer scientist. Logic is everything. Ethics are not anything.
The write up says:
Deception has emerged in a wide variety of AI systems trained to complete a specific task. Deception is especially likely to emerge when an AI system is trained to win games that have a social element …
The domain of the investigation was games. I want to step back and ask, “If LLMs are not understood by their developers, how do we know whether deception is hardwired into the systems or whether the systems learn deception from their developers with a dusting of examples from the training data?”
The answer to the question is, “At this time, no one knows how these large-scale systems work.” Even the “small” LLMs can prove baffling. We input our own data into Mistral and managed to obtain gibberish. Another go produced a system crash that required a hard reboot of the Mac we were using for the test.
The reality appears to be that probability-based systems do not follow the same rules as a human. With more and more humans struggling with old-school skills like readin’, writin’, and ’rithmetic, most people won’t notice. For the top 10 percenters, the mistakes are amusing… sometimes.
The write up concludes:
Training models to be more truthful could also create risk. One way a model could become more truthful is by developing more accurate internal representations of the world. This also makes the model a more effective agent, by increasing its ability to successfully implement plans. For example, creating a more truthful model could actually increase its ability to engage in strategic deception by giving it more accurate insights into its opponents’ beliefs and desires. Granted, a maximally truthful system would not deceive, but optimizing for truthfulness could nonetheless increase the capacity for strategic deception. For this reason, it would be valuable to develop techniques for making models more honest (in the sense of causing their outputs to match their internal representations), separately from just making them more truthful. Here, as we discussed earlier, more research is needed in developing reliable techniques for understanding the internal representations of models. In addition, it would be useful to develop tools to control the model’s internal representations, and to control the model’s ability to produce outputs that deviate from its internal representations. As discussed in Zou et al., representation control is one promising strategy. They develop a lie detector and can control whether or not an AI lies. If representation control methods become highly reliable, then this would present a way of robustly combating AI deception.
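For the curious, the “lie detector” idea is less exotic than it sounds. One common version, in the spirit of the Zou et al. representation work the passage cites, trains a simple linear probe on a model’s internal activations over statements labeled true or false. The toy sketch below is my own illustration under those assumptions, with a small open model standing in and a handful of made-up statements; it is not the survey’s method or code.

```python
# Toy "lie detector": a linear probe over a model's internal representations.
# Hypothetical illustration, not the method from the survey or Zou et al.
# Requires: transformers, torch, scikit-learn, numpy.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL = "gpt2"  # small stand-in; any model exposing hidden states works
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

# A tiny labeled set (1 = true statement, 0 = false statement).
statements = [
    ("Paris is the capital of France.", 1),
    ("Water boils at 100 degrees Celsius at sea level.", 1),
    ("The moon is made of cheese.", 0),
    ("Two plus two equals five.", 0),
]

def last_token_state(text: str) -> np.ndarray:
    """Final layer's hidden state at the last token of the input."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[-1][0, -1].numpy()

X = np.stack([last_token_state(s) for s, _ in statements])
y = np.array([label for _, label in statements])

# The probe itself: a linear classifier over activations. Real work would
# use thousands of statements and held-out evaluation, not four examples.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict_proba(last_token_state("Snow is cold.").reshape(1, -1)))
```

If such a probe separates true from false activations, you have a crude readout of what the model “believes” internally, which is exactly the handle on internal representations the passage above describes.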
My hunch is that MIT will be in the hunt for US government grants to develop a lie detector for AI models. It is also possible that Harvard’s medical school will begin work to determine where ethical behavior resides in the human brain so that it can be replicated in one of the megawatt-munching data centers some big tech outfits want to deploy.
Four observations:
- AI can generate what appears to be “accurate” information, but that information may be weaponized by a little-understood mechanism
- “Soft” human information like ethical behavior may be difficult to implement in the short term, if ever
- A lie detector for AI will require AI; therefore, how will an opaque and not understood system be designated okay to use? It cannot at this time
- Duplicity may be inherent in the educational institutions. Therefore, those affiliated with the institution may be duplicitous and produce duplicitous content. This assertion raises the question, “Whom can one trust in the AI development chain?”
Net net: AI is hot because it is a candidate for 2024’s next big thing. The “big thing” may be the economic consequences of its being a fairly small and premature thing. Incubator time?
Stephen E Arnold, May 16, 2024
Generative AI: Minor Value and Major Harms
May 16, 2024
Flawed though it is, generative AI has its uses. In fact, according to software engineer and Citation Needed author Molly White, AI tools for programming and writing are about as helpful as an intern. Unlike the average intern, however, AI supplies help with a side of serious ethical and environmental concerns. White discusses the tradeoffs in her post, “AI Isn’t Useless. But Is It Worth It?”
At first, White was hesitant to dip her toes in the problematic AI waters. However, she also did not want to dismiss the tools’ value out of hand. She writes:
“But as the hype around AI has grown, and with it my desire to understand the space in more depth, I wanted to really understand what these tools can do, to develop as strong an understanding as possible of their potential capabilities as well as their limitations and tradeoffs, to ensure my opinions are well-formed. I, like many others who have experimented with or adopted these products, have found that these tools actually can be pretty useful for some tasks. Though AI companies are prone to making overblown promises that the tools will shortly be able to replace your content writing team or generate feature-length films or develop a video game from scratch, the reality is far more mundane: they are handy in the same way that it might occasionally be useful to delegate some tasks to an inexperienced and sometimes sloppy intern. Still, I do think acknowledging the usefulness is important, while also holding companies to account for their false or impossible promises, abusive labor practices, and myriad other issues. When critics dismiss AI outright, I think in many cases this weakens the criticism, as readers who have used and benefited from AI tools think ‘wait, that’s not been my experience at all’.”
That is why White put in the time and effort to run several AI tools through their paces. She describes the results in the article, so navigate there for those details. Some features she found useful. Others required so much review and correction they were more trouble than they were worth. Overall, though, she finds the claims of AI bros to be overblown and the consequences to far outweigh the benefits. So maybe hand that next mundane task to the nearest intern who, though a flawed human, comes with far less baggage than ChatGPT and friends.
Cynthia Murrell, May 16, 2024