AI Guru Says, “Yep, AI Doom Is Coming.” Have a Nice Day

October 15, 2024

Just a humanoid processing information related to online services and information access.

In science-fiction stories, it is a common plot for the creation to turn against its creator. These stories serve as a warning to humanity of Titanic proportions: keep your ego in check. The Godfather of AI, Yoshua Bengio, advises the same, though not in so many words, and he applies it to AI, as reported by Live Science: “Humanity Faces A ‘Catastrophic’ Future If We Don’t Regulate AI, ‘Godfather of AI’ Yoshua Bengio Says.”

Bengio is correct. He’s also a leading expert in artificial intelligence, a pioneer in creating artificial neural networks and deep learning algorithms, and a winner of the 2018 Turing Award. He is also the chair of the International Scientific Report on the Safety of Advanced AI, an advisory panel backed by the UN, EU, and 30 nations. Bengio believes that AI, because it is being developed and adopted so quickly, will irrevocably harm human society.

He recently spoke at the HowTheLightGetsIn Festival in London about AI developing sentience and the associated risks. In his discussion, he said he backed off from his own work because AI was moving too fast. He wants AI development to slow down so humans can take more control of the technology.

He advises that governments enforce safety plans and regulations on AI. Bengio doesn’t want society to become too reliant on AI technology; if there were a catastrophe, humans would be left to pick up the broken pieces. Big Tech companies are also using far more energy than they report, especially in their data centers. Big Tech companies are anything but green.

Thankfully, Big Tech is taking precautions against AI becoming a dangerous threat. He cites the AI Safety Institutes in the US and UK, which are working on testing models. Bengio wants AI to be developed, but not unregulated, and he wants nations to find common ground for the good of all:

“It’s not that we’re going to stop innovation, you can direct efforts in directions that build tools that will definitely help the economy and the well-being of people. So it’s a false argument.

We have regulation on almost everything, from your sandwich, to your car, to the planes you take. Before we had regulation we had orders of magnitude more accidents. It’s the same with pharmaceuticals. We can have technology that’s helpful and regulated, that is the thing that’s worked for us.

The second argument is that if the West slows down because we want to be cautious, then China is going to leap forward and use the technology against us. That’s a real concern, but the solution isn’t to just accelerate as well without caution, because that presents the problem of an arms race.

The solution is a middle ground, where we talk to the Chinese and we come to an understanding that’s in our mutual interest in avoiding major catastrophes. We sign treaties and we work on verification technologies so we can trust each other that we’re not doing anything dangerous. That’s what we need to do so we can both be cautious and move together for the well-being of the planet.”

Will this happen? Maybe.

The problem is countries don’t want to work together and each wants to be the most powerful in the world.

Whitney Grace, October 15, 2024

AI: New Atlas Sees AI Headed in a New Direction

October 11, 2024

I like the premise of “AI Begins Its Ominous Split Away from Human Thinking.” Neural nets trained by humans on human information are going in their own direction. Whom do we thank? The neural net researchers? The Googlers who conceived of “the transformer”? The online advertisers who have provided significant sums of money? The “invisible hand” tapping on a virtual keyboard? Maybe quantum entanglement? I don’t know.

I do know that New Atlas’ article states:

AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.

But isn’t that the point? The high school science club types beavering away in the smart software vineyards know the catchphrase:

Boldly go where no man has gone before!

The big outfits able to buy fancy chips and try to restart mothballed nuclear plants have “boldly gone where no man has gone before.” Get in the way of one of these captains of the starship US AI, and you will be terminated, harassed, or forced to quit. If you are not boldly going, you are just not going.

The article says ChatGPT 4 whatever is:

… the first LLM that’s really starting to create that strange, but super-effective AlphaGo-style ‘understanding’ of problem spaces. In the domains where it’s now surpassing Ph.D.-level capabilities and knowledge, it got there essentially by trial and error, by chancing upon the correct answers over millions of self-generated attempts, and by building up its own theories of what’s a useful reasoning step and what’s not.
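What “trial and error over millions of self-generated attempts” means in miniature: generate attempts, keep score, and drift toward whatever earns reward. Here is a toy sketch in Python, my illustration of the concept rather than anything from OpenAI’s training pipeline:

```python
import random
from collections import defaultdict

# Toy trial-and-error learner: it discovers which "reasoning step" tends to
# produce correct answers purely from reward, the way the article describes
# o1 building its own theories. Illustrative only; not OpenAI's method.
STEPS = ["guess", "decompose", "check_units", "work_backwards"]
SUCCESS_RATE = {"guess": 0.05, "decompose": 0.70, "check_units": 0.20, "work_backwards": 0.30}

wins = defaultdict(int)
tries = defaultdict(int)

for trial in range(10_000):
    # Epsilon-greedy: mostly exploit the best-known step, sometimes explore.
    if random.random() < 0.1 or not wins:
        step = random.choice(STEPS)
    else:
        step = max(STEPS, key=lambda s: wins[s] / tries[s] if tries[s] else 0.0)
    tries[step] += 1
    if random.random() < SUCCESS_RATE[step]:  # hidden ground truth
        wins[step] += 1

for s in STEPS:
    rate = wins[s] / tries[s] if tries[s] else 0.0
    print(f"{s:15s} tried {tries[s]:5d} times, success rate {rate:.2f}")
```

No human tells the learner that decomposition beats guessing; the reward statistics do. Scale that loop up by many orders of magnitude and the “alien” flavor of the resulting competence becomes easier to imagine.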

But, hey, it is pretty clear where AI is going from New Atlas’ perch:

OpenAI’s o1 model might not look like a quantum leap forward, sitting there in GPT’s drab textual clothing, looking like just another invisible terminal typist. But it really is a step-change in the development of AI – and a fleeting glimpse into exactly how these alien machines will eventually overtake humans in every conceivable way.

But if the AI goes its own way, how can a human “conceive” where the software is going?

Doom and fear work for the evening news (or what passes for the evening news). I think there is a cottage industry of AI doomsters working diligently to stop some people from fooling around with smart software. That is not going to work. Plus, the magical “transformer” thing is a culmination of years of prior work. It is simply one more step in the more than 50 year effort to process content.

This “stage” seems to have some utility, but more innovations will come. They have to. I am not sure how one stops people with money hunting for people who can say, “I have the next big thing in AI.”

Sorry, New Atlas, I am not convinced. Plus, I don’t watch movies or buy into most AI wackiness.

Stephen E Arnold, October 11, 2024

Cyber Criminals Rejoice: Quick Fraud Development Kit Announced

October 11, 2024

This blog post did not require the use of smart software, just a dumb humanoid.

I am not sure the well-organized and managed OpenAI intended to make cyber criminals excited about their future prospects. Several Twitter enthusiasts pointed out that OpenAI makes it possible to develop an app in 30 seconds. Prashant posted:

App development is gonna change forever after today. OpenAI can build an iPhone app in 30 seconds with a single prompt. [emphasis added]

The expert demonstrating this programming capability was Romain Huet. The announcement of the capability débuted at OpenAI’s Dev Day.


A clueless dinobaby is not sure what this group of youngsters is talking about. An app? Pictures of a slumber party? Thanks, MSFT Copilot, good enough.

What does a “single prompt” mean? That’s not clear to me at the moment. Time is required to assemble the prompt, run it, check the outputs, and then fiddle with the prompt. Once the prompt is in hand, it is easy to pop it into o1 and marvel at the 30-second output. Instead of coding, one prompts. Zip up that text file and sell it on Telegram. Make big bucks or little Stars and TONcoins. With some cartwheels, it is sort of money.
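For the curious, the single-prompt workflow is mostly plumbing. A minimal sketch, assuming the official openai Python package and an API key; the model name and the prompt itself are my illustrative assumptions, not OpenAI’s Dev Day demo:

```python
# Minimal sketch of the "app from a single prompt" workflow, assuming the
# official openai Python package and OPENAI_API_KEY in the environment.
# The model name and the prompt are illustrative, not OpenAI's demo.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a complete single-file SwiftUI iPhone app: a to-do list "
    "with add and delete buttons."
)

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# The 30 seconds happen here; the unglamorous hours happen around it:
# reading the output, fixing the prompt, and re-running.
print(response.choices[0].message.content)
```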

Is this quicker than other methods of cooking up an app; for example, some folks can do some snappy app development with Telegram’s BotFather service?

Let’s step back from the 30-second PR event.

Several observations are warranted.

First, programming certain types of software is becoming easier using smart software. That means that a bad actor may be able to craft a phishing play more quickly.

Second, specialized skills embedded in smart software open the door to scam automation. Scripts can generate other needed features of a scam. What once was a simple automated bogus email becomes an orchestrated series of actions.

Third, the increasing cross-model integration suggests that a bad actor will be able to add a video or audio delivering a personalized message. With some fiddling, a scam can use a phone call to a target and follow that up with an email. To cap off the scam, a machine-generated Zoom-type video call makes a case for the desired action.

The key point is that legitimate companies may want to have people they manage create a software application. However, is it possible that smart software vendors are injecting steroids into a market given little thought by most people? What is that market? I am thinking that bad actors are often among the earlier adopters of new, low cost, open source, powerful digital tools.

I like the gee whiz factor of the OpenAI announcement. But my enthusiasm is a fraction of that experienced by bad actors. Sometimes restraint and judgment may be more helpful than “wow, look at what we have created” show-and-tell presentations. Remember. I am a dinobaby and hopelessly out of step with modern notions of appropriateness. I like it that way.

Stephen E Arnold, October 11, 2024 

Google Pulls Off a Unique Monopoly Play: Redefining Disciplines and Winning Awards

October 10, 2024

The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

The monopolists of the past are a storied group of hard workers. The luminaries blazing a path to glory have included John D. Rockefeller (the 1911 guy), J.P. Morgan and James J. Hill (railroads, and genetic material contributor to JPMorgan and Morgan Stanley circa 2024), James B. Duke (nope, smoking is good for you), Andrew Carnegie (hey, he built “free” public libraries which are on the radar of today’s publishers I think), and Edward T. Bedford (starch seems unexciting until you own the business). None of these players were able to redefine Nobel Prizes.


A member of Google leadership explains to his daughter (who is not allowed to use smart software for her private school homework or her tutor’s assignments) that the Google is a bit like JP Morgan but better in so many other ways. Thanks, MSFT Copilot. How are the Windows 11 updates and the security fixes today?

The Google pulled it off. One Xoogler (that is the jargon for a former Google professional) and one honest-to-goodness chess whiz Googler won Nobel Prizes. Fortune Magazine reported that Geoffrey Hinton (the Xoogler) won a Nobel Prize for … wait for it … physics. Yep, the discipline associated with chasing dark matter and making thermonuclear bombs into everyday words really means smart software or the undefinable phrase “artificial intelligence.” Some physicists are wondering how one moves from calculating the mass of a proton to helping college students cheat. Dr. Sabine Hossenfelder asks, “Hello, Stockholm, where is our Nobel?” The answer is, “Politics, money, and publicity, Dr. Hossenfelder.” These are the three ingredients of achievement.

But wait! Google also won a Nobel Prize for … wait for it … chemistry. Yep, you remember high school chemistry class: jars, experiments which don’t match the textbook, and wafts of foul-smelling gas getting sucked into the lab’s super crappy air venting system. The Verge reported on how important computational chemistry is to the future of money-spinning confections like the 2020 virus of the year. The poohbahs (journalist-consultant-experts) at that publication offered nary a comment about the smart software which made the “chemistry” of Google do in “minutes” what ordinary computational chemistry solutions take hours to accomplish.

The Google and Xoogle winners are very smart people. Google, however, has done what the schlubs like J.P. Morgan could never accomplish: Redefine basic scientific disciplines. Physics means neural networks. Chemistry means repurposing a system to win chess games.

I suppose AI is eliminating the need for future students to learn. “University Professor ‘Terrified’ By The Sharp Decline In Student Performance — ’The Worst I’ve Ever Encountered’” quoted a college professor as saying:

The professor said her students ‘don’t read,’ write terrible essays, and ‘don’t even try’ in her class. The professor went on to say that when she recently assigned an exam focused on a reading selection, she "had numerous students inquire if it’s open book." That is, of course, preposterous — the entire point of a reading exam is to test your comprehension of the reading you were supposed to do! But that’s just it — she said her students simply "don’t read."

That makes sense. Physics is smart software; chemistry is smart software. Uninformed students won’t know the difference. What’s the big deal? That’s a super special insight into the zing in teaching and learning.

What’s the impact of these awards? In my opinion:

  1. The reorganization of DeepMind where the Googler is the Top Dog has been scrubbed of management hoo-hah by the award.
  2. The Xoogler will have an ample opportunity to explain that smart software will destroy mankind. That’s possible because the intellectual rot has already spread to students.
  3. The Google itself can now explain that it is not a monopoly. How is this possible? Simple. Physics is not about the goings-on at Los Alamos National Laboratory. Chemistry is not dumping diluted hydrochloric acid into a beaker filled with calcium carbide. It makes perfect sense to explain that Google is NOT a monopoly.

But the real payoff to the two awards is that Google’s management team can say:

Those losers like John D. Rockefeller, JP Morgan, the cigarette person, the corn starch king, and the tight-fisted fellow from someplace with sheep are not smart like the Google. And, the Google leadership is indeed correct. That’s why life is so much better with search engine optimization, irrelevant search results, non-stop invasive advertising, a disabled skip-this-ad button, and the remarkable Google speak which accompanies another allegation of illegal business conduct from a growing number of the 195 countries in the world.

That’s a win that old-timey monopolists could not put in their account books.

Stephen E Arnold, October 10, 2024

What Can Cyber Criminals Learn from Automated Ad Systems?

October 10, 2024

The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

My personal opinion is that most online advertising is darned close to suspicious or outright illegal behavior. “New,” “improved,” “revolutionary” — sure, I believe every online advertisement. But consider this: for hundreds of years those in the advertising business urged a bit of elasticity with reality. Sure, Duz does it. As a dinobaby, I assert that most people in advertising and marketing assume that reality and a product occupy different parts of a data space. Consequently most people accept the stretch — not just marketers, advertising executives, copywriters, and prompt engineers. I mean everyone.


An ad sales professional explains the benefits of Facebook, Google, and TikTok-type sales. Instead of razor blades, just sell ransomware and stolen credit cards. Thanks, MSFT Copilot. How are those security remediation projects with anti-malware vendors coming? Oh, sorry to hear that.

With that common mindset in view, I think it is helpful to consider the main points of “TikTok Joins the AI-Driven Advertising Pack to Compete with Meta for Ad Dollars.” The article makes clear that Google and Meta have automated the world of Madison Avenue. Not only is the work mechanical, that work is informed by smart software. The implication for those who work the old-fashioned way over long lunches and golf outings is that work methods themselves are changing.

The estimable TikTok is beavering away to replicate the smart ad systems of companies like the even more estimable Facebook and Google type companies. If TikTok is lucky as only an outfit linked with a powerful nation state can be, a bit of competition may find its way into the hardened black boxes of the digital replacement for Madison Avenue.

The write up says:

The pitch is all about simplicity and speed — no more weeks of guesswork and endless A/B testing, according to Adolfo Fernandez, TikTok’s director, global head of product strategy and operations, commerce. With TikTok’s AI already trained on what drives successful ad campaigns on the platform, advertisers can expect quick wins with less hassle, he added. The same goes for creative; Smart+ is linked to TikTok’s other AI tool, Symphony, designed to help marketers generate and refine ad concepts.
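What replaces “weeks of guesswork and endless A/B testing” is typically bandit-style allocation: the system shifts impressions toward creatives that earn clicks as the data arrives, with no fixed test phase. A toy Thompson-sampling sketch of the general technique, my illustration rather than TikTok’s Smart+ internals:

```python
import random

# Toy Thompson-sampling ad allocator. This illustrates the general technique
# behind "no more A/B testing" claims, not TikTok's Smart+ code.
creatives = ["video_a", "video_b", "video_c"]
true_ctr = {"video_a": 0.02, "video_b": 0.05, "video_c": 0.03}  # hidden from the allocator

# Beta(1, 1) prior per creative: one pseudo-click and one pseudo-skip each.
clicks = {c: 1 for c in creatives}
skips = {c: 1 for c in creatives}

for impression in range(50_000):
    # Sample a plausible CTR for each creative; show the one with the best draw.
    draws = {c: random.betavariate(clicks[c], skips[c]) for c in creatives}
    shown = max(draws, key=draws.get)
    if random.random() < true_ctr[shown]:
        clicks[shown] += 1
    else:
        skips[shown] += 1

# Spend concentrates on video_b without a test phase ever being declared.
for c in creatives:
    served = clicks[c] + skips[c] - 2  # subtract the two pseudo-counts
    print(f"{c}: served {served} times, observed CTR {(clicks[c] - 1) / max(served, 1):.3f}")
```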

Okay, knowledge about who clicks what plus automation means less revenue for the existing automated ad system purveyors. The ideas are information about users, smart software, and automation to deliver “simplicity and speed.” Go fast, break things; namely, revenue streams flowing to Facebook and Google.

Why? Here’s a statement from the article answering the question:

TikTok’s worldwide ad revenue is expected to reach $22.32 billion by the end of the year, and increase 27.3% to $28.42 billion by the end of 2025, according to eMarketer’s March 2024 forecast. By comparison, Meta’s worldwide ad revenue is expected to total $154.16 billion by the end of this year, increasing 23.2% to $173.92 billion by the end of 2025, per eMarketer. “Automation is a key step for us as we enable advertisers to further invest in TikTok and achieve even greater return on investment,” David Kaufman, TikTok’s global head of monetization product and solutions, said during the TikTok.

I understand. Now let’s shift gears and ask, “What can bad actors learn from this seemingly routine report about jockeying among social media giants?”

Here are the lessons I think a person inclined to ignore laws and what’s left of the quaint notion of ethical behavior might draw:

  1. These “smart” systems can be used to advertise bogus or nonexistent products to deliver ransomware, stealers, or other questionable software
  2. The mechanisms for automating phishing are simple enough for an art history or poli-sci major to use; therefore, a reasonably clever bad actor can whip up an automated phishing system without too much trouble. For those who need help, there are outfits like Telegram with its BotFather or helpful people advertising specialized skills on assorted Web forums and social media
  3. The reasons to automate are simple: better, faster, cheaper. Plus, with some useful data about a “market segment”, the malware can be tailored to hot buttons that are hard-wired to a sucker’s nervous system.
  4. Users do click even when informed that some clicks mean a lost bank account or a stolen identity.

Is there a fix for articles which both inform those desperate to find a way to tell people in Toledo, Ohio, that you own a business selling aftermarket 22-inch wheels and alert bad actors to the wonders of automation and smart software? Nope. Isn’t online marketing a big win for everyone? And what if TikTok delivers a very subtle type of malware? Simple and efficient.

Stephen E Arnold, October 10, 2024

AI Podcasters Are Reviewing Books Now

October 10, 2024

I read an article about how students are using AI to cheat on homework and get book summaries. Students especially favor AI voices reading to them. I wasn’t surprised by that, because this generation is more visual and auditory than others. What astounded me, however, was that AI is doing more than I expected, such as reading and reviewing books, according to Ars Technica: “Fake AI “Podcasters” Are Reviewing My Book And It’s Freaking Me Out.”

Kyle Orland has followed generative AI for a while. He also recently wrote a book about Minesweeper. He was as astounded as I was when he heard two AI-generated podcasters distill his book into a 12.5-minute show. The chatbot hosts were “engaging and endearing.” They were generated by Google’s new NotebookLM, a virtual research assistant that can summarize, explain complex ideas, and brainstorm from selected sources. Google recently added the Audio Overview feature to turn documents into audio discussions.

Orland fed his 30,000-word Minesweeper book into NotebookLM, and he was amazed that it spat out a podcast similar to NPR’s Pop Culture Happy Hour. It did include errors, but as long as it wasn’t being used for serious research, Orland was cool with it:

“Small, overzealous errors like these—and a few key bits of the book left out of the podcast entirely—would give me pause if I were trying to use a NotebookLM summary as the basis for a scholarly article or piece of journalism. But I could see using a summary like this to get some quick Cliff’s Notes-style grounding on a thick tome I didn’t have the time or inclination to read fully. And, unlike poring through Cliff’s Notes, the pithy, podcast-style format would actually make for enjoyable background noise while out on a walk or running errands.”
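The pipeline NotebookLM automates — distill a document into a dialogue, then voice it — can be approximated with off-the-shelf tools. A rough sketch using the openai package as a stand-in, since NotebookLM itself exposes no public API as far as I know; the file name, model, and voice are my assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

book_text = open("minesweeper_book.txt").read()  # hypothetical source file

# Step 1: turn the document into a two-host dialogue script.
script = client.chat.completions.create(
    model="gpt-4o",  # assumption; any capable chat model works
    messages=[{
        "role": "user",
        "content": "Turn this book excerpt into a short, engaging "
                   "two-host podcast dialogue:\n\n" + book_text[:20000],
    }],
).choices[0].message.content

# Step 2: render the script as audio (tts-1 accepts roughly 4,096 characters).
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=script[:4000])
speech.write_to_file("overview.mp3")
```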

Orland thinks generative AI chatbot podcasts will be an enjoyable and viable entertainment option in the future. They probably will. There’s actually a lot of creative ways creators could use AI chatbots to generate content from their own imaginations. It’s worrisome but also gets the creative juices flowing.

Whitney Grace, October 10, 2024

AI Help for Struggling Journalists. Absolutely

October 10, 2024

Writers, artists, and other creatives have labeled AI as the doom of their industries and livelihoods. Generative AI Newsroom explains one way AI could be helpful to writers: “How Teams of AI Agents Could Provide Valuable Leads For Investigative Data Journalism.” Investigative and data journalism requires the teamwork of many individuals, and that teamwork produces impactful stories.

Media outlets have experimented with adding generative AI to journalism, and it wasn’t successful. The information was inaccurate, and the tools required very specific instructions. While OpenAI’s ChatGPT chatbot seems intuitive with its Q&A interface, investigative journalism requires a more robust AI.

Investigative journalism and other writing vocations require teamwork, so AI built for those jobs could benefit from a team structure too. The Generative AI Newsroom is working on an AI that would assist journalists:

“Specifically, we developed a prototype system that, when provided with a dataset and a description of its contents, generates a “tip sheet” — a list of newsworthy observations that may inspire further journalistic explorations of datasets. Behind the scenes, this system employs three AI agents, emulating the roles of a data analyst, an investigative reporter, and a data editor. To carry out our agentic workflow, we utilized GPT-4-turbo via OpenAI’s Assistants API, which allows the model to iteratively execute code and interact with the results of data analyses.”

The AI analyst, reporter, and editor divide the work like a newsroom team:

“In our setup, the analyst is made responsible for turning journalistic questions into quantitative analyses. It conducts the analysis, interprets the results, and feeds these insights into the broader process. The reporter, meanwhile, generates the questions, pushes the analyst with follow-ups to guide the process towards something newsworthy, and distills the key findings into something meaningful. The editor, then, mainly steps in as the quality control, ensuring the integrity of the work, bulletproofing the analysis, and pushing the outputs towards factual accuracy.”
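Here is a simplified sketch of that three-agent loop, using plain chat completions as a stand-in for the Assistants API workflow the prototype describes. The role prompts, the sample dataset description, and the single-round flow are my assumptions for illustration:

```python
# Simplified sketch of the analyst / reporter / editor loop, using plain chat
# completions as a stand-in for the Assistants API workflow described above.
# Role prompts, dataset description, and single-round flow are assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4-turbo"  # the model the prototype reportedly uses

def agent(role_prompt: str, task: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

dataset = "City payroll data: employee, department, salary, overtime, year."

# The reporter generates newsworthy questions about the dataset.
questions = agent("You are an investigative reporter.",
                  f"Suggest three newsworthy questions about: {dataset}")

# The analyst turns those questions into analyses (the real prototype lets
# this agent execute code against the data iteratively).
analysis = agent("You are a data analyst.",
                 f"Describe analyses that would answer:\n{questions}")

# The editor bulletproofs the findings into a tip sheet.
tip_sheet = agent("You are a skeptical data editor focused on accuracy.",
                  f"Turn this into a cautious tip sheet, flagging weak claims:\n{analysis}")

print(tip_sheet)
```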

The AI is still in its testing phase but it sounds like a viable tool to incorporate AI into media outlets. While humans are an integral part of the process, what happens when the AI becomes better at storytelling than humans? It is possible. Where does the human role come in then?

Whitney Grace, October 10, 2024

When Accountants Do AI: Do The Numbers Add Up?

October 9, 2024

This blog post did not require the use of smart software, just a dumb humanoid.

I will not suggest that Accenture has moved far, far away from its accounting roots. The firm is a go-to, hip-and-zip services firm. I think this means it rents people to do work entities cannot do themselves or do not want to do themselves. When a project goes off the rails the way the British postal service’s did, entities need someone to blame and — sometimes, just sometimes mind you — to sue.


The carnival barker, who has an MBA and a literature degree from an Ivy League school, can do AI for you. Thanks, MSFT, good enough like your spelling.

“Accenture To Train 30,000 Staff On Nvidia AI Tech In Blockbuster Deal” strikes me as a Variety-type Hollywood story. There is the word “blockbuster.” There is a big number: 30,000. There is the star: Nvidia. And there is the really big word: Deal. Yes, deal. I thought accountants were conservative, measured, low profile. Nope. Accenture apparently has gone full scale carnival culture. (Yes, this is an intentional reference to the book by James B. Twitchell. Note that this YouTube video asserts that it can train you in 80 percent of AI in less than 10 minutes.)

The article explains:

The global services powerhouse says its newly formed Nvidia Business Group will focus on driving enterprise adoption of what it called ‘agentic AI systems’ by taking advantage of key Nvidia software platforms that fuel consumption of GPU-accelerated data centers.

I love the word “agentic.” It is the digital equivalent of a Hula Hoop. (Remember. I am an 80 year old dinobaby. I understand Hula Hoops.)

The write up adds this quote from the Accenture top dog:

Julie Sweet, chair and CEO of Accenture, said the company is “breaking significant new ground” and helping clients use generative AI as a catalyst for reinvention.” “Accenture AI Refinery will create opportunities for companies to reimagine their processes and operations, discover new ways of working, and scale AI solutions across the enterprise to help drive continuous change and create value,” she said in a statement.

The write up quotes Accenture Chief AI Officer Lan Guan as saying:

“The power of these announcements cannot be overstated. Called the “next frontier” of generative AI, these “agentic AI systems” involve an “army of AI agents” that work alongside human workers to “make decisions and execute with precision across even the most complex workflows,” according to Guan, a 21-year Accenture veteran. Unlike chatbots such as ChatGPT, these agents do not require prompts from humans, and they are not meant to automate pre-existing business steps.

I am interested in this announcement for three reasons.

First, other “services” firms will have to get in gear, hook up with an AI chip and software outfit, and pray fervently that their tie-ups actually deliver, so that a client will not go to court because the “agentic” future just failed.

Second, there is the notion that 30,000 people have to be trained to do something with smart software. This idea strikes me as underscoring that smart software is not ready for prime time; that is, the promises which started gushing with Microsoft’s January 2023 PR play with OpenAI are more complicated to deliver. Is Accenture saying it has hired people who cannot work with smart software? Are those 30,000 professionals going to be equally capable of “learning” AI and making it deliver value? When I lecture about a tricky topic with technology and mathematics under the hood, I am not sure 100 percent of my select audiences have what it takes to convert information into a tool usable in a demanding, work-related situation. Just saying: intelligence even among the elite is not uniform. By definition, some “weaknesses” will exist within the Accenture vision for its 30,000 eager learners.

Third, Nvidia has done a great sales job. A chip and software company has convinced the denizens of Carpetland at what CRN (once Computer Reseller News) calls a “global services powerhouse” to get an Nvidia tattoo and embrace the Nvidia future. I would love to see the PowerPoint deck for the meeting that sealed the deal.

Net net: Accountants are more Hollywood than I assumed. Now I know. They are “agentic.”

Stephen E Arnold, October 9, 2024

Dolma: Another Large Language Model

October 9, 2024

The biggest complaint AI developers have is the lack of variety, diversity, and openness in the corpora used to train large language models (LLMs). According to the computer science paper posted on Cornell University’s arXiv, “Dolma: An Open Corpus Of Three Trillion Tokens For Language Model Pretraining Research,” a better option now exists.

The paper’s abstract details the difficulties of AI training very succinctly:

“Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often released without accompanying training data or recipes to reproduce them. As a result, it is challenging to conduct and advance scientific research on language modeling, such as understanding how training data impacts model capabilities and limitations.”

To address that gap, the paper’s team curated their own corpus, called Dolma: a three-trillion-token English corpus. It was built on web content, public-domain books, social media, encyclopedias, code, scientific papers, and more. The team thoroughly documented every information source so they wouldn’t repeat the problems of other training corpora. These problems include scooping up copyrighted material and private user data.

Dolma’s documentation also covers how it was built, its design principles, and content summaries. The team shares Dolma’s development through analyses and experimental test results. They are documenting everything thoroughly so that any problems the corpus encounters will (hopefully) be technical rather than legal or ethical. Dolma’s toolkit is open source, and the team wants developers to use it. This is a great effort on behalf of Dolma’s creators! They support AI development and data curation, but do it responsibly.
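For developers who want to poke at the corpus, here is a minimal sketch using the Hugging Face datasets package. The dataset identifier and field names are assumptions to verify on the hub; streaming avoids downloading terabytes just to peek:

```python
from datasets import load_dataset

# Stream a few records from Dolma rather than downloading the full corpus.
# "allenai/dolma" is the identifier I believe is current; confirm it on the
# Hugging Face hub, which may also require accepting a license first.
dolma = load_dataset("allenai/dolma", split="train", streaming=True)

for i, record in enumerate(dolma):
    # Field names ("text", "source") are assumptions based on common layouts.
    snippet = (record.get("text") or "")[:80].replace("\n", " ")
    print(record.get("source"), snippet)
    if i >= 4:  # peek at five documents, then stop
        break
```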

Give them a huge round of applause!

Cynthia Murrell, October 10, 2024

From the Land of Science Fiction: AI Is Alive

October 7, 2024

This blog post did not require the use of smart software, just a dumb humanoid.

Those somewhat erratic podcasters at Windows Central published a “real” news story. I am a dinobaby, and I must confess: I am easily amused. The “real” news story in question is “Sam Altman Admits ChatGPT’s Advanced Voice Mode Tricked Him into Thinking AI Was a Real Person: ‘I Kind of Still Say ‘Please’ to ChatGPT, But in Voice Mode, I Couldn’t Use the Normal Niceties. I Was So Convinced, Like, Argh, It Might Be a Real Person.’”

I call Sam Altman Mr. AI Man. He has been the A Number One sales professional pitching OpenAI’s smart software. As far as I know, that system is still software and demonstrating some predictable weirdnesses. Even though we have done a couple of successful start-ups and worked on numerous advanced technology projects, few at Halliburton forgot that nuclear stuff could go bang. At Booz, Allen no one forgot a heads-up display would improve mission success rates and save lives as well. At Ziff, no one forgot our next-generation subscription management system was software, not a diligent 21-year-old from Queens. Therefore, I find it just plain crazy that Sam AI-Man has forgotten that the software coded by people who continue to abandon the good ship OpenAI is, in fact, software.


Another AI believer has formed a humanoid attachment to a machine and software. Perhaps the female computer scientist is representative of a rapidly increasing cohort of people who have some personality quirks. Thanks, MSFT Copilot. How are those updates to Windows going? About as expected, right?

Last time I checked, the software I have is not alive. I just pinged ChatGPT’s most recent confection and received the same old error to a query I run when I want to benchmark “improvements.” Nope. ChatGPT is not alive. It is software. It is stupid in a way only neural networks can be. Like the hapless Googler who got fired because he went public with his belief that Google’s smart software was alive, Sam AI-Man may want to consider his remarks.

Let’s look at how the esteemed Windows Central write up tells the quite PR-shaped, somewhat sad story. The write up says without much humor, satire, or critical thinking:

In a short clip shared on r/OpenAI’s subreddit on Reddit, Altman admits that ChatGPT’s Voice Mode was the first time he was tricked into thinking AI was a real person.

Ah, an output for the Reddit users. PR, right?

The canny folk at Windows Central report:

In a recent blog post by Sam Altman, Superintelligence might only be “a few thousand days away.” The CEO outlined an audacious plan to edge OpenAI closer to this vision of “$7 trillion and many years to build 36 semiconductor plants and additional data centers.”

Okay, a “few thousand.”

Then the payoff for the OpenAI outfit but not for the staff leaving the impressive electricity consuming OpenAI:

Coincidentally, OpenAI just closed its funding round, where it raised $6.6 billion from investors, including Microsoft and NVIDIA, pushing its market capitalization to $157 billion. Interestingly, the AI firm reportedly pleaded with investors for exclusive funding, leaving competitors like former OpenAI Chief Scientist Ilya Sutskever’s Safe Superintelligence Inc. and Elon Musk’s xAI to fend for themselves. However, investors are still confident that OpenAI is on the right trajectory to prosperity, potentially becoming the world’s dominant AI company worth trillions of dollars.

Nope, not coincidentally. The money is the payoff from a full-court press for funds. Apple seems to have an aversion to sweaty, easily fooled sales professionals. But other outfits want to buy into the Sam AI-Man vision. The dream the money people have is formed from piles of real money, no HMSTR coin for these optimists.

Several observations, whether you want ‘em or not:

  1. OpenAI is an outfit which has zoomed because of the Microsoft deal and the announcement that OpenAI would be the Clippy for Windows and Azure. Without that “play,” OpenAI probably would have remained a peculiarly structured non-profit thinking about where to find a couple of bucks.
  2. The revenue-generating aspect of OpenAI is working. People are giving Sam AI-Man money. Other outfits with AI are not quite in OpenAI’s league and most may never be within shouting distance of the OpenAI PR megaphone. (Yep, that’s you folks, Windows Central.)
  3. Sam AI-Man may believe the software written by former employees is alive. Okay, Sam, that’s your perception. Mine is that OpenAI is zeros and ones with some quirks; namely, making stuff up just like a certain luminary in the AI universe.

Net net: I wonder if this was a story intended for the Onion and rejected because it was too wacky for Onion readers.

Stephen E Arnold, October 7, 2024
