Another Big Consulting Firm Does Smart Software… Sort Of
September 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Will programmers and developers become targets for prosecution when flaws cripple vital computer systems? That may be a good idea because pointing to the “algorithm” as the cause of a problem does not seem to reduce the number of bugs, glitches, and unintended consequences of software. A write up which itself may be a blend of human and smart software suggests change is afoot.
Thanks, MSFT Copilot. Good enough.
“Judge Rules $400 Million Algorithmic System Illegally Denied Thousands of People’s Medicaid Benefits” reports that software crafted by the services firm Deloitte did not work as the State of Tennessee assumed. Yep, assume. A very interesting word.
The article explains:
The TennCare Connect system—built by Deloitte and other contractors for more than $400 million—is supposed to analyze income and health information to automatically determine eligibility for benefits program applicants. But in practice, the system often doesn’t load the appropriate data, assigns beneficiaries to the wrong households, and makes incorrect eligibility determinations, according to the decision from Middle District of Tennessee Judge Waverly Crenshaw Jr.
At one time, Deloitte was an accounting firm. Then it became a consulting outfit a bit like McKinsey. Well, a lot like that firm and other blue-chip consulting outfits. In its current manifestation, Deloitte is into technology, programming, and smart software. Well, maybe the software is smart but the programmers and the quality control seem to be riding in a different school bus from some other firms’ technical professionals.
The write up points out:
Deloitte was a major beneficiary of the nationwide modernization effort, winning contracts to build automated eligibility systems in more than 20 states, including Tennessee and Texas. Advocacy groups have asked the Federal Trade Commission to investigate Deloitte’s practices in Texas, where they say thousands of residents are similarly being inappropriately denied life-saving benefits by the company’s faulty systems.
In 2016, Cathy O’Neil published Weapons of Math Destruction. Her book had a number of interesting examples of what goes wrong when careless people make assumptions about numerical recipes. If she does another book, she may include this Deloitte case.
Several observations:
- The management methods used to create these smart systems require scrutiny. The downstream consequences are harmful.
- The developers and programmers can be fired, but the failure to have remediating processes in place when something unexpected surfaces must be part of the work process.
- Less informed users and more smart software strikes me as a combustible mixture. When a system ignites, the impacts may reverberate in other smart systems. What entity is going to fix the problem and accept responsibility? The answer is, “No one” unless there are significant consequences.
The State of Tennessee’s experience makes clear that a “brand name,” slick talk, an air of confidence, and possibly ill-informed managers can do harm. The opioid misstep was bad. Now imagine that type of thinking in the form of a fast, indifferent, and flawed “system.” Firing a 25-year-old is not the solution.
Stephen E Arnold, September 3, 2024
Google Claims It Fixed Gemini’s “Degenerate” People
September 2, 2024
History revision is a problem. It’s been a problem for…well…since the start of recorded history. The Internet and mass media are infamous for being incorrect about historical facts, but image generating AI, like Google’s Gemini, is even worse. Tech Crunch explains what Google did to correct its inaccurate algorithm: “Google Says It’s Fixed Gemini’s People-Generating Feature.”
Google released Gemini, then paused the chatbot’s people-generating feature after critics called its output “woke,” “politically incorrect,” and “historically inaccurate.” Among the most notorious examples: when asked to depict a Roman legion, Gemini returned an ethnically diverse group in keeping with a DEI agenda, while a request for a Zulu warrior army returned only brown-skinned people. Only the latter is historically accurate, but apparently Google did not want to offend western ethnic minorities and assumed, of course, that Europe (where light-skinned pink people originate) was ethnically diverse centuries ago.
Everything was A-OK until someone invoked Godwin’s Law by asking Gemini to generate (degenerate [sic]) an image of Nazis. Gemini returned an ethnically diverse picture with all types of Nazis, not the historically accurate light-skinned Germans native to Europe.
Google claims it fixed Gemini, and the fix took way longer than planned. The people-generating feature is only available on paid Gemini plans. How does Google plan to make its AI people less degenerative? Here’s how:
“According to the company, Imagen 3, the latest image-generating model built into Gemini, contains mitigations to make the people images Gemini produces more ‘fair.’ For example, Imagen 3 was trained on AI-generated captions designed to ‘improve the variety and diversity of concepts associated with images in [its] training data,’ according to a technical paper shared with TechCrunch. And the model’s training data was filtered for ‘safety,’ plus ‘review[ed] … with consideration to fairness issues,’ claims Google… ‘We’ve significantly reduced the potential for undesirable responses through extensive internal and external red-teaming testing, collaborating with independent experts to ensure ongoing improvement,’ the spokesperson continued. ‘Our focus has been on rigorously testing people generation before turning it back on.’”
Google will eventually make it work and the company is smart to limit Gemini’s usage to paid subscriptions. Limiting the user pool means Google can better control the chatbot and (if need be) turn it off. It will work until bad actors learn how to abuse the chatbot again for their own sheets and giggles.
Whitney Grace, September 2, 2024
What Is a Good Example of AI Enhancing Work Processes? Klarna
August 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Klarna is a financial firm in Sweden. (Did you know Sweden has a violence problem?) The country also has a company which is quite public about the value of smart software to its operations. “‘Our Chatbots Perform The Tasks Of 700 People’: Buy Now, Pay Later Company Klarna To Axe 2,000 Jobs As AI Takes On More Roles” reports:
Klarna has already cut over 1,000 employees and plans to remove nearly 2,000 more
Yep, that’s the use case. Smart software allows the firm’s leadership to terminate people. (Does that managerial attitude contribute to the crime problem in Sweden? Of course not. The company is just being efficient.)
The write up states:
Klarna claims that its AI-powered chatbot can handle the workload previously managed by 700 full-time customer service agents. The company has reduced the average resolution time for customer service inquiries from 11 minutes to two while maintaining consistent customer satisfaction ratings compared to human agents.
What’s the financial payoff for this leader in AI deployment? The write up says:
Klarna reported a 73 percent increase in average revenue per employee compared to last year.
Klarna, however, is humane. According to the article:
Notably, none of the workforce reductions have been achieved through layoffs. Instead, the company has relied on a combination of natural staff turnover and a hiring freeze implemented last year.
That’s a relief. Some companies would deploy Microsoft software with AI and start getting rid of people. The financial benefits are significant. Plus, as long as the company chugs along in good enough mode, the smart software delivers a win for the firm.
Are there any downsides? None in the write up. There is a financial payoff on the horizon. The article states:
In July [2024], Chrysalis Investments, a major Klarna investor, provided a more recent valuation estimate, suggesting that the fintech firm could achieve a valuation between 15 billion and 20 billion dollars in an initial public offering.
But what if the AI acts like a brake on the firm’s revenue growth and sales? Hey, this is an AI success story. Why be negative? AI is wonderful, and Klarna’s customers appear to be thrilled with smart software. I personally love speaking to smart chatbots, don’t you?
Stephen E Arnold, August 30, 2024
Can an AI Journalist Be Dragged into Court and Arrested?
August 28, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Being on Camera Is No Longer Sensible: Persecuted Venezuelan Journalists Turn to AI.” The main idea is that a video journalist can present the news, not a “real” human journalist. The write up says:
In daily broadcasts, the AI-created newsreaders have been telling the world about the president’s post-election crackdown on opponents, activists and the media, without putting the reporters behind the stories at risk.
The write up points out:
The need for virtual-reality newscasters is easy to understand given the political chill that has descended on Venezuela since Maduro was first elected in 2013, and has worsened in recent days.
Suppression of information seems to be increasing. Following the detention of Pavel Durov, Russia has expressed concern about this abrogation of free speech. Ukrainian government officials might find Russia’s rallying in support of Mr. Durov ironic: in April 2024, Telegram filtered content from Ukraine to Russian citizens.
An AI news presenter sitting in a holding cell. Government authorities want to discuss her approach to “real” news. Thanks, MSFT Copilot. Good enough.
Will AI “presenters” or AI “content” prevent the type of intervention suggested by Venezuelan-type government officials?
Several observations:
- Individual journalists may find that the AI avatar “plays” may not fool or amuse certain government authorities. It is possible that the use of AI and the coverage of the tactic in highly-regarded “real” news services exacerbates the problem. Somewhere, somehow a human is behind the avatar. The obvious question is, “Who is that person?”
- Once the individual journalist behind an avatar has been identified and included in an informal or formal discussion, who or what is next in the AI food chain? Is it an organization associated with “free speech,” an online service, or a giant high-technology company? What will a government do to explore a chat with these entities?
- Once the organization has been pinpointed, what about the people who wrote the software powering the avatar? What will a government do to interact with these individuals?
Step 1 seems fairly simple. Step 2 may involve some legal back and forth, but the process is not particularly novel. Step 3, however, presents a conundrum and some real challenges. Lawyers and law enforcement for the country whose “laws” have been broken have to deal with certain protocols. Embracing different techniques can have significant political consequences.
My view is that using AI intermediaries is an interesting use case for smart software. The AI doomsayers invoke smart software taking over. A more practical view of AI is that its use can lead to actions which are at first tempests in teapots. Then, when a cluster of AI teapots gets dumped over, difficult-to-predict activities can emerge. The Venezuelan government’s response to AI talking heads delivering the “real” news is a precursor and worth monitoring.
Stephen E Arnold, August 28, 2024
Am I Overly Sensitive to X (Twitter) Images?
August 28, 2024
X AI Creates Disturbing Images
The AI division of X, xAI, has produced a chatbot called Grok. Grok includes an image generator. Unlike ChatGPT and other AIs from major firms, Grok seems to have few guardrails. In fact, according to The Verge, “X’s New AI Image Generator Will Make Anything from Taylor Swift in Lingerie to Kamala Harris with a Gun.” Oh, if one asks Grok directly, it claims to have sensible guardrails and will even list a few. However, writes senior editor Adi Robertson:
“But these probably aren’t real rules, just likely-sounding predictive answers being generated on the fly. Asking multiple times will get you variations with different policies, some of which sound distinctly un-X-ish, like ‘be mindful of cultural sensitivities.’ (We’ve asked xAI if guardrails do exist, but the company hasn’t yet responded to a request for comment.) Grok’s text version will refuse to do things like help you make cocaine, a standard move for chatbots. But image prompts that would be immediately blocked on other services are fine by Grok.”
The article lists some very uncomfortable experimental images Grok has created and even shares a few. See the write-up if curious. We learn one X user found some frightening loopholes. When he told the AI he was working on medical or crime scene analysis, it allowed him to create some truly disturbing images. The write-up shares blurred versions of these. The same researcher says he got Grok to create child pornography (though he wisely does not reveal how). All this without a “Created with AI” watermark added by other major chatbots. Although he is aware of this issue, X owner Elon Musk characterizes this iteration of Grok as an “intermediate step” that allows users “to have some fun.” That is one way to put it. Robertson notes:
“Grok’s looseness is consistent with Musk’s disdain for standard AI and social media safety conventions, but the image generator is arriving at a particularly fraught moment. The European Commission is already investigating X for potential violations of the Digital Services Act, which governs how very large online platforms moderate content, and it requested information earlier this year from X and other companies about mitigating AI-related risk. … The US has far broader speech protections and a liability shield for online services, and Musk’s ties with conservative figures may earn him some favors politically.”
Perhaps. But US legislators are working on ways to regulate deepfakes that impersonate others, particularly sexually explicit imagery. Combine that with UK regulator Ofcom’s upcoming enforcement of the Online Safety Act (OSA), and Musk may soon find a permissive Grok to be a lot less fun.
Cynthia Murrell, August 28, 2024
Anthropic AI: New Allegations of Frisky Behavior
August 27, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Who knew high school science club members would mature into such frisky people? But rules are made to be broken, and apologies make the problem go away. Perhaps that works in high school with some indulgent faculty advisors. In the real world, where lawyers are more plentiful than cardinals in Kentucky, apologies may not mean anything. I learned that the highly-regarded AI outfit Anthropic will be spending some time with the firm’s lawyers.
“Anthropic Faces New Class-Action Lawsuit from Book Authors” reported:
AI company Anthropic is still battling a lyrics-focused lawsuit from music publishers, but now it has a separate legal fight on its hands. Authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson are suing the company in a class-action lawsuit in California. As with the music publishers, their focus is on the training of Anthropic’s Claude chatbot.
I anticipate a few of the really smart and oh-so-busy wizards will be sitting in a conference room doing the deposition thing. That involves lawyers who are not as scientifically oriented as AI wizards trying to make sense of Anthropic’s use of OPW (other people’s work) without permission. If you are a fan of legal filings, you can read the 20-page document at this link.
Those AI wizards are clever, aren’t they?
Stephen E Arnold, August 27, 2024
A Tool to Fool AI Detectors
August 27, 2024
Here is one way to fool the automated AI detectors: AIHumanizer. A Redditor in the r/ChatGPT subreddit has created a “New Tool that Removes Frequent AI Phrases like ‘Unleash’ or ‘Elevate’.” Sufficient_Ice_6113 writes:
“I created a simple tool which lets you humanize your texts and remove all the robotic or repeated phrases ChatGPT usually uses like ‘Unleash’ ‘elevate’ etc. here is a longer list of them: Most used AI words 100 most common AI words. To remove them and improve your texts you can use aihumanizer.com which completely rewrites your text to be more like it was written by a human. It also makes it undetectable by AI detectors as a side effect because the texts don’t have the common AI pattern any longer. It is really useful in case you want to use an AI text for any work related things, most people can easily tell an email or application to a job was written by AI when it includes ‘Unleash’ and ‘elevate’ a dozen times.”
The author links to their example result at Undetectable AI, the “AI Detector and Humanizer.” That site declares the sample text “appears human.” See the post’s comments for another example, submitted by benkei_sudo. They opine the tool “is a good start, but needs a lot of improvement” because, though it fools the AI checkers, an actual human would have their suspicions. That could be a problem for emails or press releases that, at least for now, tend to be read by people. But how many actual humans are checking resumes or standardized-test essays these days? Besides, Sufficient Ice emphasizes, AIHumanizer offers an upgraded version for an undisclosed price (though a free trial is available). The AI-content arms race continues.
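AIHumanizer’s actual rewriting method is not disclosed, so here is only a minimal sketch of the underlying idea: scan text for stock “AI-sounding” words and flag or replace them. The word list and the plainer substitutes below are illustrative assumptions, not the tool’s real list, and real tools rewrite whole sentences rather than swapping words:

```python
import re

# A tiny sample of words often flagged as "AI tells" (illustrative only;
# real tools use far longer lists and full rewriting, not word swaps).
AI_TELLS = {
    "unleash": "bring out",
    "elevate": "improve",
    "delve": "dig",
    "tapestry": "mix",
}

def flag_ai_tells(text: str) -> list[str]:
    """Return the AI-tell words found in the text (case-insensitive)."""
    found = []
    for word in AI_TELLS:
        if re.search(rf"\b{word}\b", text, re.IGNORECASE):
            found.append(word)
    return found

def soften_ai_tells(text: str) -> str:
    """Replace each AI-tell word with a plainer synonym."""
    for word, plain in AI_TELLS.items():
        text = re.sub(rf"\b{word}\b", plain, text, flags=re.IGNORECASE)
    return text

sample = "Unleash your potential and elevate your workflow."
print(flag_ai_tells(sample))    # → ['unleash', 'elevate']
print(soften_ai_tells(sample))  # → bring out your potential and improve your workflow.
```

A simple substitution like this would change a detector’s statistics but, as the Reddit commenters note, a human reader would still notice the stilted result; that is why the commercial tools rewrite entire passages instead.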
Cynthia Murrell, August 27, 2024
AI Snake Oil Hisses at AI
August 23, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Enthusiasm for certain types of novel software or gadgets rises and falls. The Microsoft marketing play with OpenAI marked the beginning of the smart software hype derby. Google got the message and flipped into Red Alert mode. Now about 20 months after Microsoft’s announcement about its AI tie up with Sam AI-Man, we have Google’s new combo: AI in a mobile phone. Bam! Job done. Slam dunk.
Thanks, MSFT Copilot. On top of the IPv6 issue? Oh, too bad.
I wonder if the Googlers were thinking along the same logical lines as the authors of “AI Companies Are Pivoting from Creating Gods to Building Products. Good.”
The snake oil? Dripping. Here’s a passage from the article I noted:
AI companies are collectively planning to spend a trillion dollars on hardware and data centers, but there’s been relatively little to show for it so far.
A trillion? That’s a decent number. Sam AI-Man wants more, but the scale is helpful, particularly when most numbers are mere billions in the zoom zoom world of smart software.
The most important item in the write up, in my opinion, is the list of five “challenges.” The article focuses on consumer AI, but a couple of these apply to the enterprise sector as well. Let’s look at the five “challenges.” Keep in mind, I am paraphrasing, as dinobabies often do:
- Cost. In terms of consumers, one must consider making Hamster Kombat smart. (This is a Telegram dApp.) My team informed me that this little gem has 35 million users, and it is still growing. Imagine the computational cost to infuse each and every Hamster Kombat “game” player with AI goodness. But it’s a game, and a distributed one at that, one might say. Someone has to pay for these cycles, and Hamster Kombat is not on most consumers’ radar. Telegram has about 950 million users, so the 35 million users come from that pool. What are the costs of AI-infused games outside of a walled garden? And the hardware? And the optimization engineering? And the fooling around with ad deals? Costs are not a mere hurdle; they might be a Grand Canyon-scale leap into a financial mud bank.
- Reliability. Immature systems and methods, training content issues (real and synthetic), and the fancy math which uses a lot of probability procedures guarantees some interesting outputs.
- Privacy. The consumer- or user-facing services are immature. Developers want to get something to mostly work in a good enough manner. Then security may be discussed. But on to the next feature! As a result, I am not sure anyone has a decent grasp of the security issues smart software might pose. Look at Microsoft. It’s been around almost half a century, and I learn about new security problems every day. Is smart software different?
- Safety and security. This is a concomitant of privacy. Good luck knowing what the systems do or do not do.
- User interface. I am a dinobaby. The interfaces are pale, low contrast, and change depending on what a user clicks. I like stability. Smart software simply does not comprehend that word.
Good points. My view is that the obstacle to surmount is money. I am not sure the big outfits anticipated the costs of their sally into the hallucinating world of AI. And what are those costs, pray tell? Here are selected items the financial managers at the Big Dogs are pondering along with the wording of their updated LinkedIn profiles:
- Litigation. Remarks by some icons of the high-technology sector have done little to assuage the feelings of those whose content was used without permission or compensation. A few Big Dogs are paying cash to scrape.
- Power. Yep, electricity, as EV owners know, is not really free.
- Water. Yep, modern machines produce heat if what I learned in physics was actually factual.
- People (until they can be replaced by a machine that does not require health care or engage in signing petitions).
- Data and indexing. Yep, still around and expensive.
- License fees. They are comin’ round the mountain of legal filings.
- Meals, travel and lodging. Leadership will be testifying, probably a lot.
- PR advisors and crisis consultants. See the first bullet, Litigation.
However, slowly but surely some commercial sectors are using smart software. There is an AI law firm. There are dermatologists letting AI determine what to cut, freeze, or ignore. And there are college professors using AI to help them do “original” work and create peer-review fodder.
There was a snake in the Garden of Eden, right?
Stephen E Arnold, August 23, 2024
Google Leadership Versus Valued Googlers
August 23, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The summer in rural Kentucky lingers on. About 2,300 miles away from the Sundar & Prabhakar Comedy Show’s nerve center, the Alphabet Google YouTube DeepMind entity is also experiencing “cyclonic heating from chaotic employee motion.” What does this mean? Unsteady waters? Heat stroke? Confusion? Hallucinations? My goodness.
The Google leadership faces another round of employee pushback. I read “Workers at Google DeepMind Push Company to Drop Military Contracts.”
How could the Google smart software fail to predict this pattern? My view is that smart software has some limitations when it comes to managing AI wizards. Furthermore, Google senior managers have not been able to extract full knowledge value from the tools at their disposal to deal with complexity. Time Magazine reports:
Nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.
Why are AI Googlers grousing about military work? My personal view is that the recent hagiography of Palantir’s Alex Karp and the tie-up between Microsoft and Palantir for Impact Level 5 services mean that the US government is gearing up to spend some big bucks for warfighting technology. Google wants — really needs — this revenue. Penalties for the frisky behavior Judge Mehta describes as “monopolistic” could put a hitch in the git-along of Google ad revenue. Therefore, Google’s smart software can meet the hunger militaries have for intelligent software to perform a wide variety of functions. As the Russian special operation makes clear, “meat based” warfare is somewhat inefficient. Ukrainian garage-built drones with some AI bolted on perform better than a wave of 18-year-olds with rifles and a handful of bullets. The example which sticks in my mind is a Ukrainian drone spotting a Russian soldier in a field, partially obscured by bushes, attending to nature’s call. The drone spots the “shape” and explodes near the Russian infantryman.
A former consultant faces an interpersonal Waterloo. How did that work out for Napoleon? Thanks, MSFT Copilot. Are you guys working on the IPv6 issue? Busy weekend ahead?
Those who study warfare probably have their own ah-ha moment.
The Time Magazine write up adds:
Those principles state the company [Google/DeepMind] will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.”) The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”
I love it when wizards “believe” something.
Will the Sundar & Prabhakar brain trust do the believing or bank revenue from government agencies eager to gain access to advanced artificial intelligence services and systems? My view is that the “believers” underestimate the uncertainty arising from the potential sanctions, fines, or corporate deconstruction Judge Mehta’s decision presents.
The article adds this bit of color about the Sundar & Prabhakar response time to Googlers’ concern about warfighting applications:
The [objecting employees’] letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”
“No meaningful response” suggests that the Alphabet Google YouTube DeepMind rhetoric is not satisfactory.
The write up concludes with this paragraph:
At a DeepMind town hall event in June, executives were asked to respond to the letter, according to three people with knowledge of the matter. DeepMind’s chief operating officer Lila Ibrahim answered the question. She told employees that DeepMind would not design or deploy any AI applications for weaponry or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy, according to a set of notes taken during the meeting that were reviewed by TIME. Ibrahim added that she was proud of Google’s track record of advancing safe and responsible AI, and that it was the reason she chose to join, and stay at, the company.
With Microsoft and Palantir, among others, poised to capture some end-of-fiscal-year money from certain US government budgets, the comedy act’s headquarters’ planners want a piece of the action. How will the Sundar & Prabhakar Comedy Act handle the situation? Why procrastinate? Perhaps the comedy act hopes the issue will just go away. The complaining employees have short attention spans, rely on TikTok-type services for information, and can be terminated like other Googlers who grouse, picket, boycott the Foosball table, or quiet quit while working on a personal start up.
The approach worked reasonably well before Judge Mehta labeled Google a monopoly operation. It worked when ad dollars flowed like latte at Philz Coffee. But today is different, and the unsettled personnel are not a joke and add to the uncertainty some have about the Google we know and love.
Stephen E Arnold, August 23, 2024
AI Balloon: Losing Air and Boring People
August 22, 2024
Though tech bros who went all-in on AI still promise huge breakthroughs just over the horizon, Windows Central’s Kevin Okemwa warns: “The Generative AI Bubble Might Burst, Sending the Tech to an Early Deathbed Before Its Prime: ‘Don’t Believe the Hype’.” Sadly, it is probably too late to save certain career paths, like coding, from an AI takeover. But perhaps a slowdown would conserve some valuable resources. Wouldn’t that be nice? The write-up observes:
“While AI has opened up the world to endless opportunities and untapped potential, its hype might be short-lived, with challenges abounding. Aside from its high water and power demands, recent studies show that AI might be a fad and further claim that 30% of its projects will be abandoned after proof of concept. Similar sentiments are echoed in a recent Blood In The Machine newsletter, which points out critical issues that might potentially lead to ‘the beginning of the end of the generative AI boom.’ From the Blood in the Machine newsletter analysis by Brian Merchant, who is also the Los Angeles Times’ technology columnist:
‘This is it. Generative AI, as a commercial tech phenomenon, has reached its apex. The hype is evaporating. The tech is too unreliable, too often. The vibes are terrible. The air is escaping from the bubble. To me, the question is more about whether the air will rush out all at once, sending the tech sector careening downward like a balloon that someone blew up, failed to tie off properly, and let go—or, more slowly, shrinking down to size in gradual sputters, while emitting embarrassing fart sounds, like a balloon being deliberately pinched around the opening by a smirking teenager.’”
Such evocative imagery. Merchant’s article also notes that, though Enterprise AI was meant to be the way AI firms made their money, it is turning out to be a dud. There are several reasons for this, not the least of which is AI models’ tendency to “hallucinate.”
Okemwa offers several points to support Merchant’s deflating-balloon claim. For example, Microsoft was recently criticized by investors for wasting their money on AI technology. Then there is NVIDIA: the chipmaker recently became the most valuable company in the world thanks to astronomical demand for its hardware to power AI projects. However, a delay of its latest powerful chip dropped its stock’s value by 5%, and market experts suspect its value will continue to decline. The write-up also points to trouble at generative AI’s flagship firm, OpenAI. The company is plagued by a disturbing exodus of top executives, rumors of pending bankruptcy, and a pesky lawsuit from Elon Musk.
Speaking of Mr. Musk, how do those who say AI will kill us all respond to the potential AI downturn? Crickets.
Cynthia Murrell, August 22, 2024