Did AI Say, Smile and Pay Despite Bankruptcy
December 11, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Going out of business is a painful event for [a] the whiz kids who dreamed up an idea guaranteed to baffle grandma, [b] the friends, family, and venture capitalists who funded the sure-fire next Google, and [c] the “customers” or more accurately the “users” who gave the product or service a whirl and some cash.
Therefore, one who took an entry-level philosophy class as a sophomore might have brushed against the thorny bush of ethics. Some get scratched, emulate the folks who wore chains and sharpened nails under their Grieve St Laurent robes, and read medieval wisdom literature for fun. Others just dump that baloney and focus on figuring out how to exit Dodge City without a posse riding hard after them.
The young woman learns that the creditors of an insolvent firm may “sell” her account to companies which operate on a “pay or else” policy. Imagine. You have lousy teeth and you could be put in jail. Look at the bright side. In some nation states, prison medical services include dental work. Anesthetic? Yeah. Maybe not so much. Thanks, MSFT Copilot. You had a bit of a hiccup this morning, but you spit out a tooth with an image on it. Close enough.
I read “Smile Direct Club shuts down after Filing for Bankruptcy – What It Means for Customers.” With AI customer service solutions available, one would think that a zoom zoom semi-high tech outfit would find a way to handle issues in an elegant way. Wait! Maybe the company did, and this article documents how smart software may influence certain business decisions.
The story is simple. Smile Direct could not make its mail order dental business pay off. The cited news story presents what might be a glimpse of the AI future. I quote:
Smile Direct Club has also revealed its "lifetime smile guarantee" it previously offered was no longer valid, while those with payment plans set up are expected to continue making payments. The company has not yet revealed how customers can get refunds.
I like the idea that a “lifetime” is vague; therefore, once the company dies, the user is dead too. I enjoyed immensely the alleged expectation that customers who are using the mail order dental service — even though it is defunct and not delivering its “product” — will have to keep making payments. I assume that the friendly folks at online payment services and our friends at the big credit card companies will just keep doing the automatic billing. (Those payment institutions have super duper customer service systems in my experience. Yours, of course, may differ from mine.)
I am looking forward to straightening out this story. (You know. Dental braces. Straightening teeth via mail order. High tech. The next Google. Yada yada.)
Stephen E Arnold, December 11, 2023
23andMe: Fancy Dancing at the Security Breach Ball
December 11, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Here’s a story I found amusing. Very Sillycon Valley. Very high school science clubby. Navigate to “23andMe Moves to Thwart Class-Action Lawsuits by Quietly Updating Terms.” The main point of the write up is that the firm’s security was breached. How? Probably those stupid customers or a cyber security vendor installing smart software that did not work.
How some influential wizards work to deflect actions hostile to their interests. In the cartoon, the Big Dog tells a young professional, “Just change the words.” Logical, right? Thanks, MSFT Copilot. Close enough for horseshoes.
The article reports:
Following a hack that potentially ensnared 6.9 million of its users, 23andMe has updated its terms of service to make it more difficult for you to take the DNA testing kit company to court, and you only have 30 days to opt out.
I have not spit in a 23andMe tube. I’m good at least for this most recent example of hard-to-imagine security missteps. The article cites other publications but drives home what I think is a useful insight into the thought process of big-time Sillycon Valley firms:
customers were informed via email that “important updates were made to the Dispute Resolution and Arbitration section” on Nov. 30 “to include procedures that will encourage a prompt resolution of any disputes and to streamline arbitration proceedings where multiple similar claims are filed.” Customers have 30 days to let the site know if they disagree with the terms. If they don’t reach out via email to opt out, the company will consider their silence an agreement to the new terms.
No more neutral arbitrators, please. To make the firm’s intentions easier to understand, the cited article concludes:
The new TOS specifically calls out class-action lawsuits as prohibited. “To the fullest extent allowed by applicable law, you and we agree that each party may bring disputes against the other party only in an individual capacity, and not as a class action or collective action or class arbitration” …
I like this move for three reasons:
- It provides another example of how certain Information Highway contractors view the Rules of the Road. In a word, “flexible.” In another word, “malleable.”
- The maneuver is one that seems to be — how shall I phrase it — elephantine, not dainty and subtle.
- The “fix” for the problem is to make the estimable company less likely to get hit with massive claims in a court. Courts, obviously, are not to be trusted in some situations.
I find the entire maneuver chuckle-invoking. Am I surprised at the move? Nah. You can’t kid this dinobaby.
Stephen E Arnold, December 11, 2023
Constraints Make AI More Human. Who Would Have Guessed?
December 11, 2023
This essay is the work of a dumb dinobaby. No smart software required.
AI developers could be one step closer to artificially recreating the human brain. Science Daily discusses a study from the University of Cambridge, “AI System Self-Organizes To Develop Features of Brains Of Complex Organisms.” Neural systems must organize themselves, form connections, and balance an organism’s competing demands. They need energy and resources to grow an organism’s physical body while also optimizing neural activity for information processing. This balancing act explains why animal brains converge on similar organizational solutions.
Brains are built to solve and understand complex problems while expending as little energy as possible. Biological systems usually evolve to make the most of the energy resources available to them.
“See how much better the output is when we constrain the smart software,” says the young keyboard operator. Thanks, MSFT Copilot. Good enough.
Scientists from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge experimented with this concept when they made a simplified brain model and applied physical constraints. The model developed traits similar to human brains.
The scientists tested the model brain system by having it navigate a maze. Maze navigation was chosen because it requires various tasks to be completed. The different tasks activate different nodes in the model. Nodes are similar to brain neurons. The brain model needed to practice navigating the maze:
“Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task over and over again, until eventually it learns to perform it correctly.
With their system, however, the physical constraint meant that the further away two nodes were, the more difficult it was to build a connection between the two nodes in response to the feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.”
The physical constraints on the model forced its nodes to react and adapt much as a human brain does. The implication for AI is that similar constraints could yield algorithms that process more complex tasks faster, as well as advance the evolution of “robot” brains.
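The mechanism the study describes — task feedback strengthening connections while physical distance makes them costly to maintain — can be sketched in a few lines of toy code. Everything below (node count, the learning and cost rates, the distance thresholds) is invented for illustration and is not the Cambridge team's actual model; it only shows how a distance-dependent upkeep cost pushes a network toward the short-range, locally wired structure the study reports.

```python
import math
import random

random.seed(0)

# Place N nodes at random positions in a unit square: a toy stand-in
# for a spatially embedded network.
N = 20
positions = [(random.random(), random.random()) for _ in range(N)]

def dist(i, j):
    """Euclidean distance between nodes i and j."""
    (x1, y1), (x2, y2) = positions[i], positions[j]
    return math.hypot(x1 - x2, y1 - y2)

# Start every directed connection at the same strength.
weights = {(i, j): 1.0 for i in range(N) for j in range(N) if i != j}

LEARN = 0.05  # reward: task feedback nudges each connection upward
COST = 0.2    # penalty: upkeep cost scales with physical distance

for _ in range(200):
    for edge in weights:
        weights[edge] += LEARN                          # feedback reward
        weights[edge] -= COST * dist(*edge) * weights[edge]  # distance tax
        weights[edge] = max(weights[edge], 0.0)         # strengths stay non-negative

# Each weight settles near LEARN / (COST * distance), so nearby nodes end
# up far more strongly connected than distant ones.
short = [w for e, w in weights.items() if dist(*e) < 0.3]
long_ = [w for e, w in weights.items() if dist(*e) >= 0.7]
avg_short = sum(short) / len(short)
avg_long = sum(long_) / len(long_)
print(f"mean short-range weight {avg_short:.2f} vs long-range {avg_long:.2f}")
```

Running the sketch shows short-range connections dominating after training, which is the gist of the study's finding: the constraint alone, not any explicit instruction, produces brain-like local wiring.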
Whitney Grace, December 11, 2023
Weaponizing AI Information for Rubes with Googley Fakes
December 8, 2023
This essay is the work of a dumb dinobaby. No smart software required.
From the “Hey, rube” department: “Google Admits That a Gemini AI Demo Video Was Staged” reports as actual factual:
There was no voice interaction, nor was the demo happening in real time.
Young Star Wars’ fans learn the truth behind the scenes which thrill them. Thanks, MSFT Copilot. One try and some work with the speech bubble and I was good to go.
And to what magical event does this mysterious statement refer? The Google Gemini announcement. Yep, 16 Hollywood style videos of “reality.” Engadget asserts:
Google is counting on its very own GPT-4 competitor, Gemini, so much that it staged parts of a recent demo video. In an opinion piece, Bloomberg says Google admits that for its video titled “Hands-on with Gemini: Interacting with multimodal AI,” not only was it edited to speed up the outputs (which was declared in the video description), but the implied voice interaction between the human user and the AI was actually non-existent.
The article makes what I think is a rather gentle statement:
This is far less impressive than the video wants to mislead us into thinking, and worse yet, the lack of disclaimer about the actual input method makes Gemini’s readiness rather questionable.
Hopefully sometime in the near future Googlers can make reality from Hollywood-type fantasies. After all, policeware vendors have been trying to deliver a Minority Report-type of investigative experience for a heck of a lot longer.
What’s the most interesting part of the Google AI achievement? I think it illuminates the thinking of those who live in an ethical galaxy far, far away… if true, of course. Of course. I wonder if the same “fake it til you make it” approach applies to other Google activities?
Stephen E Arnold, December 8, 2023
Google Smart Software Titbits: Post Gemini Edition
December 8, 2023
This essay is the work of a dumb dinobaby. No smart software required.
In the Apple-inspired roll out of Google Gemini, the excitement is palpable. Is your heart palpitating? Ah, no. Neither is mine. Nevertheless, in the aftershock of a blockbuster “me too” the knowledge shrapnel has peppered my dinobaby lair; to wit: Gemini, according to Wired, is a “new breed” of AI. The source? Google’s Demis Hassabis.
What happens when the marketing does not align with the user experience? Tell the hardware wizards to shift into high gear, of course. Then tell the marketing professionals to evolve the story. Thanks, MSFT Copilot. You know I think you enjoyed generating this image.
Navigate to “Google Confirms That Its Cofounder Sergey Brin Played a Key Role in Creating Its ChatGPT Rival.” That’s a clickable headline. The write up asserts: “Google hinted that its cofounder Sergey Brin played a key role in the tech giant’s AI push.”
Interesting. One person involved in both Google and OpenAI. And Google responding to OpenAI after one year? Management brilliance or another high school science club method? The right information at the right time is nine-tenths of any battle. Was Google not processing information? Was the information it received about OpenAI incorrect or weaponized? Now Gemini is a “new breed” of AI. The Verge reports that McDonald’s burger joints will use Google AI to “make sure your fries are fresh.”
Google has been busy in non-AI areas; for instance:
- The Register asserts that a US senator claims Google and Apple reveal push notification data to non-US nation states
- Google has ramped up its donations to universities, according to TechMeme
- Lost files you thought were in Google Drive? Never fear. Google has a software tool you can use to fix your problem. Well, that’s what Engadget says.
So an AI problem? What problem?
Stephen E Arnold, December 8, 2023
Safe AI or Money: Expert Concludes That Money Wins
December 8, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read “The Frantic Battle over OpenAI Shows That Money Triumphs in the End.” The author, an esteemed wizard in the world of finance and economics, reveals that money is important. Here’s a snippet from the essay which I found truly revolutionary, brilliant, insightful, and truly novel:
The academic wizard has concluded that a ball is indeed round. The world of geometry has been stunned. The ball is not just round. It exists as a sphere. The most shocking insight from the Ivory Tower is that the ball bounces. Thanks for the good enough image, MSFT Copilot.
But ever since OpenAI’s ChatGPT looked to be on its way to achieving the holy grail of tech – an at-scale consumer platform that would generate billions of dollars in profits – its non-profit safety mission has been endangered by big money. Now, big money is on the way to devouring safety.
Who knew?
The essay continues:
Which all goes to show that the real Frankenstein monster of AI is human greed. Private enterprise, motivated by the lure of ever-greater profits, cannot be relied on to police itself against the horrors that an unfettered AI will create. Last week’s frantic battle over OpenAI shows that not even a non-profit board with a capped profit structure for investors can match the power of big tech and Wall Street. Money triumphs in the end.
Oh, my goodness. Plato, Aristotle, and other mere pretenders to genius you have been put to shame. My heart is palpitating from the revelation that “money triumphs in the end.”
Stephen E Arnold, December 8, 2023
A Soft Rah Rah for a Professional Publisher
December 8, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Predictive modeling and other AI capabilities have the potential to greatly accelerate scientific research. But since algorithmic research assistants are only as good as their data, time spent by humans rigorously sourcing the best data can cause a bottleneck. Now, reports New Zealand’s IT Brief, “Elsevier Launches ‘Datasets’ to Assist Research with Predictive AI Models.” Journalist Catherine Knowles writes:
“Elsevier, a global expert in scientific information and analytics, has launched Datasets, a new research product to assist a range of industries including life sciences, engineering, chemicals, and energy. The product utilizes generative AI and predictive analytics technologies, addressing the frequent challenge of data scientists having to dedicate significant time to source quality data for well-trained AI models. Datasets speeds up the digital transformation process by providing comprehensive, machine-readable data derived from trusted academic sources. With the ability to be fully integrated into private and secure computational ecosystems, its implementation helps safeguard intellectual property. The product aims to accelerate innovative thinking and business-critical decision-making processes in sectors heavy in research and development. Elsevier’s Datasets have a range of potential applications. These vary from determining the appropriate material for the development of a product by accessing sources such as Elsevier’s 271 million chemical substance records, to predicting drug efficacy and toxicity using advanced neural networks. Additionally, businesses can uncover company-wide expertise in specific disciplines through Elsevier’s 1.8 billion cited references and 17 million author profiles.”
This reminds us of the Scopus upgrade we learned about over the summer, but the write-up does not mention whether the projects are connected. We do learn Datasets can be incorporated into custom applications and third-party tools. If all goes well, this could be one AI application that actually contributes to society. Imagine that.
Cynthia Murrell, December 8, 2023
Big Tech, Big Fakes, Bigger Money: What Will AI Kill?
December 7, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I don’t read The Hollywood Reporter. I did one job for a Hollywood big wheel. That was enough for me. I don’t drink. I don’t take drugs unless prescribed by my comic book addicted medical doctor in rural Kentucky. I don’t dress up and wear skin bronzers in the hope that my mobile will buzz. I don’t stay out late. I don’t fancy doing things which make my ethical compass buzz more angrily than my mobile phone. Therefore, The Hollywood Reporter does not speak to me.
One of my research team sent me a link to “The Rise of AI-Powered Stars: Big Money and Risks.” I scanned the write up and then I went through it again. By golly, The Hollywood Reporter hit on an “AI will kill us” angle not getting as much publicity as Sam AI-Man’s minimal substance interview.
Can a techno feudalist generate new content using what looks like “stars” or “well known” people? Probably. A payoff has to be within sight. Otherwise, move on to the next next big thing. Thanks, MSFT Copilot. Good enough cartoon.
Please, read the original and complete article in The Hollywood Reporter. Here’s the passage which rang the insight bell for me:
tech firms are using the power of celebrities to introduce the underlying technology to the masses. “There’s a huge possible business there and I think that’s what YouTube and the music companies see, for better or for worse
Let’s think about these statements.
First, the idea of consumerizing AI for the masses is interesting. However, I interpret the insight as having several force vectors:
- Become the plumbing for the next wave of user generated content (UGC)
- Get paid by users AND impose an advertising tax on the UGC
- Obtain real-time data about the efficacy of specific smart generation features so that resources can be directed to maintain a “moat” against would-be attackers.
Second, by signing deals with people who are, to me, essentially unknown, the techno giants are digging some trenches and putting somewhat crude asparagus obstacles where the competitors are likely to drive their AI machines. The benefits include:
- First-hand experience with how the stars’ ego systems respond
- The data regarding cost of signing up a star, payouts, and selling ads against the content
- Determining what push back exists [a] among fans and [b] among the historical middlemen who have just been put on notice that they can find their future elsewhere.
Finally, the idea of the upside and the downside for particular entities and companies is interesting. There will be winners and losers. Right now, Hollywood is a loser. TikTok is a winner. The companies identified in The Hollywood Reporter want to be winners — big winners.
I may have to start paying more attention to this publication and its stories. Good stuff. What will AI kill? The cost of some human “talent”?
Stephen E Arnold, December 7, 2023
Will TikTok Go Slow in AI? Well, Sure
December 7, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The AI efforts of non-governmental organizations, government agencies, and international groups are interesting. Many resolutions, proclamations, blog polemics, etc. have been saying, “Slow down AI. Smart software will put people out of work. Destroy humans’ ability to think. Unleash the ‘I’ll be back’ guy.”
Getting those enthusiastic about smart software is a management problem. Thanks, MSFT Copilot. Good enough.
My stance in the midst of this fearmongering has been bemusement. I know that predicting the future perturbations of technology is as difficult as picking a Kentucky Derby winner and not picking a horse that will drop dead during the race. When groups issue proclamations and guidelines without an enforcement mechanism, not much is going to happen in the restraint department.
I submit as partial evidence for my bemusement the article “TikTok Owner ByteDance Joins Generative AI Frenzy with Service for Chatbot Development, Memo Says.” What seems clear, if the write up is mostly on the money, is that a company linked to China is joining “the race to offer AI model development as a service.”
Two quick points:
- Model development allows the provider to get a sneak peek at what the user of the system is trying to do. This means that information flows from customer to provider.
- The company in the “race” is one of some concern to certain governments and their representatives.
The write up says:
ByteDance, the Chinese owner of TikTok, is working on an open platform that will allow users to create their own chatbots, as the company races to catch up in generative artificial intelligence (AI) amid fierce competition that kicked off with last year’s launch of ChatGPT. The “bot development platform” will be launched as a public beta by the end of the month…
The cited article points out:
China’s most valuable unicorn has been known for using some form of AI behind the scenes from day one. Its recommendation algorithms are considered the “secret sauce” behind TikTok’s success. Now it is jumping into an emerging market for offering large language models (LLMs) as a service.
What other countries are beavering away on smart software? Will these drive in the slow lane or the fast lane?
Stephen E Arnold, December 7, 2023
Just for the Financially Irresponsible: Social Shopping
December 7, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Amazon likes to make it as easy as possible for consumers to fork over their hard-earned cash on a whim. More steps between seeing a product and checking out means more time to reconsider a spontaneous purchase, after all. That is why the company has been working to integrate purchases into social media platforms. Payment-platform news site PYMNTS reports on the latest linkage in, “Amazon Extends Social Shopping Efforts with Snapchat Deal.” Amazon’s partnership with Meta had already granted it quick access to eyeballs and wallets at Facebook and Instagram. Now users of all three platforms will be able to link those social media accounts to their Amazon accounts. We are told:
“It’s a partnership that lets both companies play to their strengths: Amazon gets to help merchants find customers who might not have actively sought out their products. And Meta’s discovery-based model lets users receive targeted ads without searching for them. Amazon also has a deal with Pinterest, signed in April, designed to create more shoppable content by enhancing the platform’s offering of relevant products and brands. These partnerships are happening at a moment when social media has become a crucial tool for consumers to find new products.”
That is one way to put it. Here is another: the deals let Amazon take advantage of users’ cognitive haze. Scrolling social media has been linked to information overload, shallow thinking, reduced attention span, and fragmented thoughts. A recipe for perfect victims. I mean, customers. We wonder what Meta is getting in exchange for handing them over.
Cynthia Murrell, December 7, 2023

