Enter the Dragon: America Is Unhealthy

November 4, 2024

dino orange_thumb_thumbWritten by a humanoid dinobaby. No AI except the illustration.

The YouTube video “A Genius Girl Who Is Passionate about Repairing Machines” presents a simple story in a 38 minute video. The idea is that a young woman with no help fixes a broken motorcycles with basic hand tools outside in what looks like a hoarder’s backyard. The message is: Wow, she is smart and capable. Don’t you wish you knew person like this who could repair your broken motorcycle.

This video is from @vutvtgamming and not much information is provided. After watching this and similar videos like “Genius Girl Restored The 280mm Lathe From 50 Years Ago And Made It Look Like”, I feel pretty stupid for an America dinobaby. I don’t think I can recall meeting a person with similar mechanical skills when I worked at Keystone Steel, Halliburton Nuclear, or Booz, Allen & Hamilton’s Design & Development division. The message I carried away was: I was stupid as were many people with whom I associated.

image

Thanks, MSFT Copilot. Good enough. (I slipped a put down through your filters. Imagine that!)

I picked up a similar vibe when I read “Today’s AI Ecosystem Is Unsustainable for Most Everyone But Nvidia, Warns Top Scholar.” On the surface, the ZDNet write up is an interview with the “scholar” Kai-Fu Lee, who, according to the article:

served as founding director of Microsoft Research Asia before working at Google and Apple, founded his current company, Sinovation Ventures, to fund startups such as 01.AI, which makes a generative AI search engine called BeaGo.

I am not sure how “scholar” correlates with commercial work for US companies and running an investment firm with a keen interest in Chinese start ups. I would not use the word “scholar.” My hunch is that the intent of Kai-Fu Lee is to present as simple and obvious something that US companies don’t understand. The interview is a different approach to explaining how advanced Kai-Fu Lee’s expertise is. He is, via this interview, sharing an opinion that the US is creating a problem and overlooking the simple solution. Just like the young woman able to repair a motorcycle or the lass fixing up a broken industrial lathe alone, the American approach does not get the job done.

What does ZDNet present as Kai-Fu Lee’s message. Here are a couple of examples:

“The ecosystem is incredibly unhealthy,” said Kai-Fu Lee in a private discussion forum earlier this month. Lee was referring to the profit disparity between, on the one hand, makers of AI infrastructure, including Nvidia and Google, and, on the other hand, the application developers and companies that are supposed to use AI to reinvent their operations.

Interesting. I wonder if the “healthy” ecosystem might be China’s approach of pragmatism and nuts-and-bolts evidenced in the referenced videos. The unhealthy versus healthy is a not-so-subtle message about digging one’s own grave in my opinion. The “economics” of AI are unhealthy, which seems to say, “America’s approach to smart software is going to kill it. A more healthy approach is the one in which government and business work to create applications.” Translating: China, healthy; America, sick as a dog.

Here’s another statement:

Today’s AI ecosystem, according to Lee, consists of Nvidia, and, to a lesser extent, other chip makers such as Intel and Advanced Micro Devices. Collectively, the chip makers rake in $75 billion in annual chip sales from AI processing. “The infrastructure is making $10 billion, and apps, $5 billion,” said Lee. “If we continue in this inverse pyramid, it’s going to be a problem,” he said.

Who will flip the pyramid? Uganda, Lao PDR, Greece? Nope, nope, nope. The flip will take an outfit with a strong mind and body. A healthy entity is needed to flip the pyramid. I wonder if that strong entity is China.

Here’s Kai-Fu kung fu move:

He recommended that companies build their own vertically integrated tech stack the way Apple did with the iPhone, in order to dramatically lower the cost of generative AI. Lee’s striking assertion is that the most successful companies will be those that build most of the generative AI components — including the chips — themselves, rather than relying on Nvidia. He cited how Apple’s Steve Jobs pushed his teams to build all the parts of the iPhone, rather than waiting for technology to come down in price.

In the write up Kai-Fu Lee refers to “we”. Who is included in that we? Excluded will be the “unhealthy.” Who is left? I would suggest that the pragmatic and application focused will be the winners. The reason? The “we” includes the healthy entities. Once again I am thinking of China’s approach to smart software.

What’s the correct outcome? Kai-Fu Lee allegedly said:

What should result, he said, is “a smaller, leaner group of leaders who are not just hiring people to solve problems, but delegating to smart enterprise AI for particular functions — that’s when this will make the biggest deal.”

That sounds like the Chinese approach to a number of technical, social, and political challenges. Healthy? Absolutely.

Several observations:

  1. I wonder if ZDNet checked on the background of the “scholar” interviewed at length?
  2. Did ZDNet think about the “healthy” versus “unhealthy” theme in the write up?
  3. Did ZDNet question the “scholar’s” purpose in explaining what’s wrong with the US approach to smart software?

I think I know the answer. The ZDNet outfit and the creators of this unusual private interview believe that the young women rebuilt complicated devices without any assistance. Smart China; dumb America. I understand the message which seems to have not been internalized by ZDNet. But I am a dumb dinobaby. What do I know? Exactly. Unhealthy that American approach to AI.

Stephen E Arnold, October 30, 2024

Great Moments in Marketing: MSFT Copilot, the Salesforce Take

November 1, 2024

dino orangeA humanoid wrote this essay. I tried to get MSFT Copilot to work, but it remains dead. That makes four days with weird messages about a glitch. That’s the standard: Good enough.

It’s not often I get a kick out of comments from myth-making billionaires. I read through the boy wonder to company founder titled “An Interview with Salesforce CEO Marc Benioff about AI Abundance.” No paywall on this essay, unlike the New York Times’ downer about smart software which appears to have played a part in a teen’s suicide. Imagine when Perplexity can control a person’s computer. What exciting stories will appear. Here’s an example of what may be more common in 2025.

image

Great moments in Salesforce marketing. A senior Agentforce executive considers great marketing and brand ideas of the past. Inspiration strikes. In 2024, he will make fun of Clippy. Yes, a 1995 reference will resonate with young deciders in 2024. Thanks, Stable Diffusion. You are working; MSFT Copilot is not.

The focus today is a single statement in this interview with the big dog of Salesforce. Here’s the quote:

Well, I guess it wasn’t the AGI that we were expecting because I think that there has been a level of sell, including Microsoft Copilot, this thing is a complete disaster. It’s like, what is this thing on my computer? I don’t even understand why Microsoft is saying that Copilot is their vision of how you’re going to transform your company with AI, and you are going to become more productive. You’re going to augment your employees, you’re going to lower your cost, improve your customer relationships, and fundamentally expand all your KPIs with Copilot. I would say, “No, Copilot is the new Clippy”, I’m even playing with a paperclip right now.

Let’s think about this series of references and assertions.

First, there is the direct statement “Microsoft Copilot, this thing is a complete disaster.” Let’s assume the big dog of Salesforce is right. The large and much loved company — Yes, I am speaking about Microsoft — rolled out a number of implementations, applications, and assertions. The firm caught everyone’s favorite Web search engine with its figurative pants down like a hapless Russian trooper about to be dispatched by a Ukrainian drone equipped with a variant of RTX. (That stuff goes bang.) Microsoft “won” a marketing battle and gained the advantage of time. Google with its Sundar & Prabhakar Comedy Act created an audience. Microsoft seized the opportunity to talk to the audience. The audience applauded. Whether the technology worked, in my opinion was secondary. Microsoft wanted to be seen as the jazzy leader.

Second, the idea of a disaster is interesting. Since Microsoft relied on what may be the world’s weirdest organizational set up and supported the crumbling structure, other companies have created smart software which surfs on Google’s transformer ideas. Microsoft did not create a disaster; it had not done anything of note in the smart software world. Microsoft is a marketer. The technology is a second class citizen. The disaster is that Microsoft’s marketing seems to be out of sync with what the PowerPoint decks say. So what’s new? The answer is, “Nothing.” The problem is that some people don’t see Microsoft’s smart software as a disaster. One example is Palantir, which is Microsoft’s new best friend. The US government cannot rely on Microsoft enough. Those contract renewals keep on rolling. Furthermore the “certified” partners could not be more thrilled. Virtually every customer and prospect wants to do something with AI. When the blind lead the blind, a person with really bad eyesight has an advantage. That’s Microsoft. Like it or not.

Third, the pitch about “transforming your company” is baloney. But it sounds good. It helps a company do something “new” but within the really familiar confines of Microsoft software. In the good old days, it was IBM that provided the cover for doing something, anything, which could produce a marketing opportunity or a way to add a bit pizazz to a 1955 Chevrolet two door 210 sedan. Thus, whether the AI works or does not work, one must not lose sight of the fact that Microsoft centric outfits are going to go with Microsoft because most professionals need PowerPoint and the bean counters do not understand anything except Excel. What strikes me as important that Microsoft can use modest, even inept smart software, and come out a winner. Who is complaining? The Fortune 1000, the US Federal government, the legions of MBA students who cannot do a class project without Excel, PowerPoint, and Word?

Finally, the ultimate reference in the quote is Clippy. Personally I think the big dog at Salesforce should have invoked both Bob and Clippy. Regardless of the “joke” hooked to these somewhat flawed concepts, the names “Bob” and “Clippy” have resonance. Bob rolled out in 1995. Clippy helped so many people beginning in the same year. Decades later Microsoft’s really odd software is going to cause a 20 something who was not born to turn away from Microsoft products and services? Nope.

Let’s sum up: Salesforce is working hard to get a marketing lift by making Microsoft look stupid. Believe me. Microsoft does not need any help. Perhaps the big dog should come up with a marketing approach that replicates or comes close to what Microsoft pulled off in 2023. Google still hasn’t recovered fully from that kung fu blow.

The big dog needs to up its marketing game. Say Salesforce and what’s the reaction? Maybe meh.

Stephen E Arnold, November 1, 2024

Surprise: Those Who Have Money Keep It and Work to Get More

October 29, 2024

dino orangeWritten by a humanoid dinobaby. No AI except the illustration.

The Economist (a newspaper, not a magazine) published “Have McKinsey and Its Consulting Rivals Got Too Big?” Big is where the money is. Small consultants can survive but a tight market, outfits like Gerson Lehrman, and AI outputters of baloney like ChatGPT mean trouble in service land.

image

A next generation blue chip consultant produces confidential and secret reports quickly and at a fraction of the cost of a blue chip firm’s team of highly motivated but mostly inexperienced college graduates. Thanks, OpenAI, close enough.

The write up says:

Clients grappling with inflation and economic uncertainty have cut back on splashy consulting projects. A dearth of mergers and acquisitions has led to a slump in demand for support with due diligence and company integrations.

Yikes. What outfits will employ MBAs expecting $180,000 per year to apply PowerPoint and Excel skills to organizations eager for charts, dot points, and the certainty only 24 year olds have? Apparently fewer than before Covid.

How does the Economist know that consulting outfits face headwinds? Here’s an example:

Bain and Deloitte have paid some graduates to delay their start dates. Newbie consultants at a number of firms complain that there is too little work to go around, stunting their career prospects. Lay-offs, typically rare in consulting, have become widespread.

Consulting firms have chased projects in China but that money machine is sputtering. The MBA crowd has found the Middle East a source of big money jobs. But the Economist points out:

In February the bosses of BCG, McKinsey and Teneo, a smaller consultancy, along with Michael Klein, a dealmaker, were hauled before a congressional committee in Washington after failing to hand over details of their work for Saudi Arabia’s Public Investment Fund.

The firm’s response was, “Staff clould be imprisoned…” (Too bad the opioid crisis folks’ admissions did not result in such harsh consequences.)

Outfits like Deloitte are now into cyber security with acquisitions like Terbium Labs. Others are in the “reskilling” game, teaching their consultants about AI. The idea is that those pollinated type A’s will teach the firms’ clients just what they need to know about smart software. Some MBAs have history majors and an MBA in social media. I wonder how that will work out.

The write up concludes:

The quicker corporate clients become comfortable with chatbots, the faster they may simply go directly to their makers in Silicon Valley. If that happens, the great eight’s short-term gains from AI could lead them towards irrelevance.

Wow, irrelevance. I disagree. I think that school relationships and the networks formed by young people in graduate school will produce service work. A young MBA who mother or father is wired in will be valuable to the blue chip outfits in the future.

My take on the next 24 months is:

  1. Clients will hire employees who use smart software and can output reports with the help of whatever AI tools get hyped on LinkedIn.
  2. The blue chip outfits will get smaller and go back to their carpeted havens and cook up some crises or trends that other companies with money absolutely have to know about.
  3. Consulting firms will do the start up play. The failure rate will be interesting to calculate. Consultants are not entrepreneurs, but with connections the advice givers can tap their contacts for some tailwind.

I have worked at a blue chip outfit. I have done some special projects for outfits trying to become blue chip outfits. My dinobaby point of view boils down to seeing the Great Eight becoming the Surviving Six and then the end game, the Tormenting Two.

What picks up the slack? Smart software. Today’s systems generate the same type of normalized pablum many consulting firms provide. Note to MBAs: There will be jobs available for individuals who know how to perform Search GEO (generated engine optimization).

Stephen E Arnold, October 29, 2024

That AI Technology Is Great for Some Teens

October 29, 2024

The New York Times ran and seemed to sensationalized a story about a young person who formed an emotional relationship with AI from Character.ai. I personally like the Independent’s story “The Disturbing Messages Shared between AI Chatbot and Teen Who Took His Own Life,” which was redisplayed on the estimable MSN.com. If the link is dead, please, don’t write Beyond Search. Contact those ever responsible folks at Microsoft. The British “real” news outfit said:

Sewell [the teen] had started using Character.AI in April 2023, shortly after he turned 14. In the months that followed, the teen became “noticeably withdrawn,” withdrew from school and extracurriculars, and started spending more and more time online. His time on Character.AI grew to a “harmful dependency,” the suit states.

Let’s shift gears. The larger issues is that social media has changed the way humans interact with each other and smart software. The British are concerned. For instance, the BBC delves into how social media has changed human interaction: “How Have Social Media Algorithms Changed The Way We Interact?”

Social media algorithms are fifteen years old. Facebook unleashed the first in 2009 and the world changed. The biggest problem associated with social media algorithms are the addiction and excess. Teenagers and kids are the populations most affected by social media and adults want to curb their screen time. Global governments are steeping up to enforce rules on social media.

The US could ban TikTok if the Chinese parent company doesn’t sell it. The UK implemented a new online safety act for content moderation, while the EU outlined new rules for tech companies. The rules will fine them 6% of turnover and suspend them if they don’t prevent election interference. Meanwhile Brazil banned X for a moment until the company agreed to have a legal representative in the country and blocked accounts that questioned the legitimacy of the country’s last election.

While the regulation laws pose logical arguments, they also limit free speech. Regulating the Internet could tip the scale from anarchy to authoritarianism:

“Adam Candeub is a law professor and a former advisor to President Trump, who describes himself as a free speech absolutist. Social media is ‘polarizing, it’s fractious, it’s rude, it’s not elevating – I think it’s a terrible way to have public discourse”, he tells the BBC. “But the alternative, which I think a lot of governments are pushing for, is to make it an instrument of social and political control and I find that horrible.’ Professor Candeub believes that, unless ‘there is a clear and present danger’ posed by the content, ‘the best approach is for a marketplace of ideas and openness towards different points of view.’”

When Musk purchased X, he compared it to a “digital town square.” Social media, however, isn’t like a town square because the algorithms rank and deliver content based what eyeballs want to see. There isn’t fair and free competition of ideas. The smart algorithms shape free speech based on what users want to see and what will make money.

So where are we? Headed to the grave yard?

Whitney Grace, October 29, 2024

Fake Defined? Next Up Trust, Ethics, and Truth

October 28, 2024

dino orange_thumbAnother post from a dinobaby. No smart software required except for the illustration.

This is a snappy headline: “You Can Now Get Fined $51,744 for Writing a Fake Review Online.” The write up states:

This mandate includes AI-generated reviews (which have recently invaded Amazon) and also encompasses dishonest celebrity endorsements as well as testimonials posted by a company’s employees, relatives, or friends, unless they include an explicit disclaimer. The rule also prohibits brands from offering any sort of incentive to prompt such an action. Suppressing negative reviews is no longer allowed, nor is promoting reviews that a company knows or should know are fake.

So, what does “fake” mean? The word appears more than 160 times in the US government document.

My hunch is that the intrepid US Federal government does not want companies to hype their products with “fake” reviews. But I don’t see a definition of “fake.” On page 10 of the government document “Use of Consumer Reviews”, I noted:

“…the deceptive or unfair commercial acts or practices involving reviews or other endorsement.”

That’s a definition of sort. Other words getting at what I would call a definition are:

  • buying reviews (these can be non fake or fake it seems)
  • deceptive
  • false
  • manipulated
  • misleading
  • unfair

On page 23 of the government document, A. 465. – Definitions appears. Alas, the word “fake” is not defined.

The document is 163 pages long and strikes me as a summary of standard public relations, marketing, content marketing, and social media practices. Toss in smart software and Telegram-type BotFather capability and one has described the information environment which buzzes, zaps, and swirls 24×7 around anyone with access to any type of electronic communication / receiving device.

image

Look what You.com generated. A high school instructor teaching a debate class about a foundational principle.

On page 119, the authors of the government document arrive at a key question, apparently raised by some of the individuals sufficiently informed to ask “killer” questions; for example:

Several commenters raised concerns about the meaning of the term “fake” in the context of indicators of social media influence. A trade association asked, “Does ‘fake’ only mean that the likes and followers were created by bots or through fake accounts? If a social media influencer were to recommend that their followers also follow another business’ social media account, would that also be ‘procuring’ of ‘fake’ indicators of social media influence? . . . If the FTC means to capture a specific category of ‘likes,’ ‘follows,’ or other metrics that do not reflect any real opinions, findings, or experiences with the marketer or its products or services, it should make that intention more clear.”

Alas, no definition is provided. “Fake” exists in a cloud of unknowing.

What if the US government prosecutors find themselves in the position of a luminary who allegedly said: “Porn. I know it when I see it.” That posture might be more acceptable than trying to explain that an artificial intelligence content generator produced a generic negative review of an Italian restaurant. A competitor uses the output via a messaging service like Telegram Messenger and creates a script to plug in the name, location, and date for 1,000 Italian restaurants. The individual then lets the script rip. When investigators look into this defamation of Italian restaurants, the trail leads back to a virtual assert service provider crime as a service operation in Lao PDR. The owner of that enterprise resides in Cambodia and has multiple cyber operations supporting the industrialized crime as a service operation. Okay, then what?

In this example, “fake” becomes secondary to a problem as large or larger than bogus reviews on US social media sites.

What’s being done when actual criminal enterprises are involved in “fake” related work. According the the United Nations, in certain nation states, law enforcement is hampered and in some cases prevented from pursuing a bad actor.

Several observations:

  1. As most high school debaters learn on Day One of class: Define your terms. Present these in plain English, not a series of anecdotes and opinions.
  2. Keep the focus sharp. If reviews designed to damage something are the problem, focus on that. Avoid the hand waving.
  3. The issue exists due to a US government policy of looking the other way with regard to the large social media and online services companies. Why not become a bit more proactive? Decades of non-regulation cannot be buried under 160 page plus documents with footnotes.

Net net: “Fake,” like other glittering generalities cannot be defined. That’s why we have some interesting challenges in today’s world. Fuzzy is good enough.

PS. If you have money, the $50,000 fine won’t make any difference. Jail time will.

Stephen E Arnold, October 28, 2024

AI Has An Invisible Language. Bad Actors Will Learn It

October 28, 2024

Do you remember those Magic Eyes back from the 1990s? You needed to cross your eyes a certain way to see the pony or the dolphin. The Magic Eyes were a phenomenon of early computer graphics and it was like an exclusive club with a secret language. There’s a new secret language on the Internet generated by AI and it could potentially sneak in malicious acts says Ars Technica: “Invisible Text That AI Chatbots Understand And Humans Can’t? Yep, It’s A Thing.”

The secret text could potentially include harmful instructions into AI chatbots and other code. The purpose would be to steal confidential information and conduct other scams all without a user’s knowledge:

“The invisible characters, the result of a quirk in the Unicode text encoding standard, create an ideal covert channel that can make it easier for attackers to conceal malicious payloads fed into an LLM. The hidden text can similarly obfuscate the exfiltration of passwords, financial information, or other secrets out of the same AI-powered bots. Because the hidden text can be combined with normal text, users can unwittingly paste it into prompts. The secret content can also be appended to visible text in chatbot output.”

The steganographic framework is built into a text encoding network and LLMs and read it. Researcher Johann Rehberger ran two proof-of-concept attacks with the hidden language to discover potential risks. He ran the tests on Microsoft 365 Copilot to find sensitive information. It worked:

“When found, the attacks induced Copilot to express the secrets in invisible characters and append them to a URL, along with instructions for the user to visit the link. Because the confidential information isn’t visible, the link appeared benign, so many users would see little reason not to click on it as instructed by Copilot. And with that, the invisible string of non-renderable characters covertly conveyed the secret messages inside to Rehberger’s server.”

What is nefarious is that the links and other content generated by the steganographic code is literally invisible. Rehberger and his team used a tool to decode the attack. Regular users are won’t detect the attacks. As we rely more on AI chatbots, it will be easier to infiltrate a person’s system.

Thankfully the Big Tech companies are aware of the problem, but not before it will probably devastate some people and companies.

Whitney Grace, October 28, 2024

Google Is AI, Folks

October 24, 2024

Google’s legal team is certainly creative. In the face of the Justice Department’s push to break up the monopoly, reports Yahoo Finance, “Google’s New Antitrust Defense is AI.” Wait, what? Reporter Hamza Shaban points to a blog post by Google VP Lee-Anne Mulholland, writing:

“In Google’s view, the government’s heavy-handed approach to transforming the search market ignores the nascent developments in AI, the fresh competition in the space, and new modes of seeking information online, like AI-powered answer engines. The energy around AI and the potential disruption of how users interact with search is, competitively speaking, a negative for Google, said Wedbush analyst Dan Ives. But in another way, as a defense against antitrust charges, it’s a positive. ‘That’s an argument against monopoly that bodes well for Google,’ he said.”

Really? Some believe quite the opposite. We learn:

“‘The DOJ has specifically noted that this evolution in technology is precisely why they are intervening at this point in time,’ said Gil Luria, an analyst at DA Davidson. ‘They want to make sure that Google is not able to convert the monopoly it currently has in Search into a monopoly in AI Enhanced Search.’”

Exactly. Google is clearly a monopoly. We think their assertion means, "treat us special because we are special." This church-lady thinking may or may not work. We live in an interesting judicial moment.

Cynthia Murrell, October 24, 2024

OpenAI: An Illustration of Modern Management Acumen

October 23, 2024

dino orange_thumb_thumb_thumb_thumb_thumb_thumb_thumb_thumbJust a humanoid processing information related to online services and information access.

The Hollywood Reporter (!) published “What the Heck Is Going On At OpenAI? As executives flee with Warnings of Danger, the Company Says It Will Plow Ahead.” When I compare the Hollywood Reporter with some of the poohbah “real” news discussion of a company on track to lose an ballpark figure of $5 billion in 2024, the write up does a good job of capturing the managerial expertise on display at the company.

image

The wanna-be lion of AI is throwing a party. Will there be staff to attend? Thanks, MSFT Copilot. Good enough.

I worked through the write up and noted a couple of interesting passages. Let’s take a look at them and then ponder the caption in the smart software generated for my blog post. Full disclosure: I used the Microsoft Copilot version of OpenAI’s applications to create the art. Is it derivative? Heck, who knows when OpenAI is involved in crafting information with a click?

The first passage I circled is the one about the OpenAI chief technology officer bailing out of the high-flying outfit:

she left because she’d given up on trying to reform or slow down the company from within. Murati was joined in her departure from the high-flying firm by two top science minds, chief research officer Bob McGrew and researcher Barret Zoph (who helped develop ChatGPT). All are leaving for no immediately known opportunity.

That suggests stability in the virtual executive suite. I suppose the the prompt used to aid these wizards in their decision to find their future elsewhere was something like “Hello, ChatGTP 4o1, I want to work in a technical field which protects intellectual property, helps save the whales, and contributes to the welfare of those without deep knowledge of multi-layer neural networks. In order to find self-fulfillment not possible with YouTube TikTok videos, what do you suggest for a group of smart software experts? Please, provide examples of potential work paths and provide sources for the information. Also, do not include low probability job opportunities like sanitation worker in the Mission District, contract work for Microsoft, or negotiator for the countries involved in a special operation, war, or regional conflict. Thanks!”

The output must have been convincing because the write up says: “All are leaving for no immediately known opportunity.” Interesting.

The second passage warranting a blue underline is a statement attributed to another former OpenAI wizard, William Saunders. He apparently told a gathering of esteemed Congressional leaders:

“AGI [artificial general intelligence or a machine smarter than every humanoid] would cause significant changes to society, including radical changes to the economy and employment. AGI could also cause the risk of catastrophic harm via systems autonomously conducting cyberattacks, or assisting in the creation of novel biological weapons,” he told lawmakers. “No one knows how to ensure that AGI systems will be safe and controlled … OpenAI will say that they are improving. I and other employees who resigned doubt they will be ready in time.”

I wonder if he asked the OpenAI smart software for tips about testifying before a Senate Committee. If he did, he seems to be voicing  the idea that smart software will help some people to develop “novel biological weapons.” Yep, we could all die in a sequel Covid 2.0: The Invisible Global Killer. (Does that sound like a motion picture suitable for Amazon, Apple, or Netflix? I have a hunch some people in Hollywood will do some tests in Peoria or Omaha wherever the “middle” of America is now.

The final snippet I underlined is:

OpenAI has something of a history of releasing products before the industry thinks they’re ready.

No kidding. But the object of the technology game is to become the first mover, obtain market share, and kill off any pretenders like a lion in Africa goes for the old, lame, young, and dumb. OpenAI wants to be the king of the AI jungle. The one challenge may be that the AI lion at the company is getting staff to attend his next party. I see empty cubicles.

Stephen E Arnold, October 23, 2024

A Little AI Surprise: Reasoning Fail

October 22, 2024

Generative AI models predict text. That is it. Oh certainly, those predictions paths can be quite elaborate and complex. But no matter how complicated, LLM processes are simply not akin to human reasoning. So we are not surprised to learn that “Apple’s Study Proves that LLM-Based AI Models Are Flawed Because They Cannot Reason,” as Apple Insider reports. That a study was required to prove the point highlights how poorly this widely-deployed technology is understood.

Apple’s researchers set out to see if they could trip up popular LLMs by adding irrelevant, contextual information to mathematical queries. The answer was a resounding yes. In fact, the more of these extraneous details they added, the worse the models did. But even one was found to reduce the output’s accuracy by as much as 65%. Contributing Editor Charles Martin writes:

“The task the team developed, called ‘GSM-NoOp’ was similar to the kind of mathematic ‘word problems’ an elementary student might encounter. The query started with the information needed to formulate a result. ‘Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday.’ The query then adds a clause that appears relevant, but actually isn’t with regards to the final answer, noting that of the kiwis picked on Sunday, ‘five of them were a bit smaller than average.’ The answer requested simply asked ‘how many kiwis does Oliver have?’ The note about the size of some of the kiwis picked on Sunday should have no bearing on the total number of kiwis picked. However, OpenAI’s model as well as Meta’s Llama3-8b subtracted the five smaller kiwis from the total result.”

Unlike schoolchildren, LLMs do not get better at this sort of problem with practice. Martin reminds us these results mirror those of a study done five years ago:

“The faulty logic was supported by a previous study from 2019 which could reliably confuse AI models by asking a question about the age of two previous Super Bowl quarterbacks. By adding in background and related information about the games they played in, and a third person who was quarterback in another bowl game, the models produced incorrect answers.”

Of course they did. Because LLMs cannot reason. Perhaps another type of AI is, or will be, up to these tasks. But if so, it is by definition something other than generative AI? What we know is that some AI wizards cannot get along with their business partners? Is that reasonable? Sure.

Cynthia Murrell, October 22, 2024

Google Search: AI Images Are Maybe Reality

October 22, 2024

AI generated images, videos, and text are infiltrating the Internet like COVID-19. 0x00000 posted on X the following thread: “Google está muerto.” The thread is Google image search for “baby peacock.” In the past, the image search would yield results of tiny brown chicks from nature blogs, zoos, Wikipedia, a few illustrations, and some social media accounts. The results would be mostly accurate.

Those days are dead.

Why?

The Google search for “baby peacock” returned images of blue, white, and other avian-like things that don’t resemble real peacock chicks. The images, in fact, look like “the idea of a baby peacock.” What does that mean?

The images from the Google search results were all AI generated with only a few being true photos of baby peacocks. Insane Facebook AI slop responded:

“Boomers told us not to trust Wikipedia only to fall for this”

That comment refers to a repost of a so-called white baby peacock with a full tail of plumage. What? The “white baby peacock” resembles someone’s craft project or a Christmas ornament than a real chick. I doubt everyone will pay that close attention, especially because the white baby peacock is adorable.

What are we going to do? Who knows. One approach is to accept AI images as reality. Who will know?

Whitney Grace, October 22, 2024

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta