OpenAI: What about Uncertainty and Google DeepMind?

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

A large number of write ups about Microsoft and its response to the OpenAI management move populate my inbox this morning (Monday, November 20, 2023).

To give you a sense of the number of poohbahs, mavens, and “real” journalists covering Microsoft’s hiring of Sam (AI-Man) Altman, I offer this screen shot of Techmeme.com taken at 11:00 am US Eastern time:

[Screenshot of Techmeme.com headlines]

A single screenshot cannot do justice to the digital bloviating on this subject as well as related matters.

I did a quick scan because I simply don’t have the time at age 79 to read every item in this single headline service. Therefore, I admit that others may have thought about the impact of the Steve Jobs-like termination, the revolt of some AI wizards, and Microsoft’s creating a new “company” and hiring Sam AI-Man and a pride of his cohorts in the span of 72 hours (give or take time for biobreaks).

In this short essay, I want to hypothesize about how the news has been received by that merry band of online advertising professionals.

To begin, I want to suggest that the turmoil about who is on first at OpenAI sent a low voltage signal through the collective body of the Google. Frisson resulted. Uncertainty and opportunity appeared together like the beloved Scylla and Charybdis, the old pals of Ulysses. The Google found its right and left Brainiac hemispheres considering that OpenAI would experience a grave setback, thus clearing a path for Googzilla alone. Then one of the Brainiac hemispheres reconsidered and perceived a grave threat from the split. In short, the Google tipped into its zone of uncertainty.


A group of online advertising experts meets to consider the news that Microsoft has hired Sam Altman. The group looks unhappy. Uncertainty is an unpleasant factor in some business decisions. Thanks, Microsoft Copilot. You captured the spirit of how some Silicon Valley wizards are reacting to the OpenAI turmoil because Microsoft used the OpenAI termination of Sam Altman as a way to gain the upper hand in the cloud and enterprise app AI sector.

Then the matter appeared to shift back to the pre-termination announcement. The co-founder of OpenAI gained more information about the number of OpenAI employees who were planning to quit or, even worse, start posting on Instagram, WhatsApp, and TikTok. (X.com is no longer considered the go-to place by the in crowd.)

The most interesting development was not that Sam AI-Man would return to the welcoming arms of OpenAI. No, Sam AI-Man and another senior executive were going to hook up with the geniuses of Redmond. A new company would be formed with Sam AI-Man in charge.

As these actions unfolded, the Googlers sank under a heavy cloud of uncertainty. What if the Softies could use Google’s own open source methods, integrate rumored Microsoft-developed AI capabilities, and make good on Sam AI-Man’s vision of an AI application store?

The Googlers found themselves reading every “real news” item about the trajectory of Sam AI-Man and Microsoft’s new AI unit. The uncertainty has morphed into another January 2023 Davos moment. Here’s my take as of 2:30 pm US Eastern, November 20, 2023:

  1. The Google faces a significant threat when it comes to enterprise AI apps. Microsoft has a lock on law firms, the government, and a number of industry sectors. Google has a presence, but when it comes to go-to apps, Microsoft is the Big Dog. More and better AI raises the specter of Microsoft putting an effective laser defense behind its existing enterprise moat.
  2. Microsoft can push its AI functionality as the Azure difference. Furthermore, if Google or, for that matter, Amazon asserts its cloud AI is better, Microsoft can argue, “We’re better because we have Sam AI-Man.” That is a compelling argument for government and enterprise customers who cannot imagine work without Excel and PowerPoint. Put more AI in those apps, and existing customers will resist blandishments from other cloud providers.
  3. Google now faces an interesting problem: Its own open source code could be converted into a death ray, enhanced by Sam AI-Man, and directed at the Google. The irony of Googzilla having its left claw vaporized by its own technology is going to be more painful than Satya Nadella rolling out another Davos “we’re doing AI” announcement.

Net net: The OpenAI machinations are interesting to many companies. To the Google, the OpenAI event and the Microsoft response is like an unsuspecting person getting zapped by Nikola Tesla’s coil. Google’s mastery of high school science club management techniques will now dig into the heart of its DeepMind.

Stephen E Arnold, November 20, 2023

Google Pulls Out a Rhetorical Method to Try to Win the AI Spoils

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

In high school in 1958, our debate team coach yapped about “framing.” The idea was new to me, and Kenneth Camp pounded it into our debate team’s collective “head” for the four years of my high school tenure. Not surprisingly, when I read “Google DeepMind Wants to Define What Counts As Artificial General Intelligence,” I jumped back in time 65 years (!) to Mr. Camp’s explanation of framing and how one can control the course of a debate with the technique.

Google should not have to use a rhetorical trick to make its case as the quantum wizard of online advertising and universal greatness. With its search and retrieval system, the company can boost, shape, and refine any message it wants. If those methods fall short, the company can slap on a “filter” or “change its rules” and deprecate certain Web sites and their messages.

But Google values academia, even if the university is one that welcomed a certain Jeffrey Epstein into its fold. (Do you remember the remarkable Jeffrey Epstein? Some of those whom he touched do, I believe.) The estimable Google is the subject of the referenced article in the MIT-linked Technology Review.

From my point of view, the big idea in the write up is, and I quote:

To come up with the new definition, the Google DeepMind team started with prominent existing definitions of AGI and drew out what they believe to be their essential common features. The team also outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals). They note that no level beyond emerging AGI has been achieved.

Shades of high school debate practice and the chestnuts scattered about the rhetorical camp fire as John Schunk, Jimmy Bond, and a few others (including the young dinobaby me) learned how one can set up a frame, populate the frame with logic and facts supporting the frame, and then point out during rebuttal that our esteemed opponents were not able to dent our well formed argumentative frame.

Is Google the optimal source for a definition of artificial general intelligence, something which does not yet exist? Is Google’s definition more useful than a science fiction writer’s or a scene from a Hollywood film?

Even the trusted online source points out:

One question the researchers don’t address in their discussion of _what_ AGI is, is _why_ we should build it. Some computer scientists, such as Timnit Gebru, founder of the Distributed AI Research Institute, have argued that the whole endeavor is weird. In a talk in April on what she sees as the false (even dangerous) promise of utopia through AGI, Gebru noted that the hypothetical technology “sounds like an unscoped system with the apparent goal of trying to do everything for everyone under any environment.” Most engineering projects have well-scoped goals. The mission to build AGI does not. Even Google DeepMind’s definitions allow for AGI that is indefinitely broad and indefinitely smart. “Don’t attempt to build a god,” Gebru said.

I am certain it is an oversight, but the telling comment comes from an individual who may have spoken out about Google’s systems and methods for smart software.


Mr. Camp, the high school debate coach, explains how a rhetorical trope can gut even those brilliant debaters from other universities. (Yes, Dartmouth, I am still thinking of you.) Google must have had a “coach” skilled in the power of framing. The company is making a bold move to define that which does not yet exist and something whose functionality is unknown. Such is the expertise of the Google. Thanks, Bing. I find your use of people of color interesting. Is this a pre-Sam ouster or a post-Sam ouster function?

What do we learn from the write up? In my view of the AI landscape, we are given some insight into Google’s belief that its rhetorical trope packaged as content marketing within an academic-type publication will lend credence to the company’s push to generate more advertising revenue. You may ask, “But won’t Google make oodles of money from smart software?” I concede that it will. However, the big bucks for the Google come from those willing to pay for eyeballs. And that, dear reader, translates to advertising.

Stephen E Arnold, November 20, 2023

The Confusion about Social Media, Online, and TikToking the Day Away

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

This dinobaby is not into social media. Those who are into it present interesting, often orthogonal views of the likes of Facebook, Twitter, and Telegram public groups.

“Concerning: Excessive Screen Time Linked to Lower Cognitive Function” reports:

In a new meta-analysis of dozens of earlier studies, we’ve found a clear link between disordered screen use and lower cognitive functioning.

I knew something was making it more and more difficult for young people to make change. My wife told me about a remarkable demonstration of cluelessness: the clerk at our local drug store did not know what a half dollar was. My wife said, “I had to wait for the manager to come and tell the clerk that it was the same as 50 pennies.” There are other clues to the deteriorating mental acuity of some individuals. Examples range from ingesting tranq to driving the wrong way on an interstate highway, a practice not unknown in the Commonwealth of Kentucky.


The debate about social media, online content consumption, and TikTok addiction continues. I find it interesting how allegedly informed people interpret data about online differently. Don’t these people watch young people doing their jobs? Thanks, MSFT Copilot. You responded despite the Sam AI-Man Altman shock.

I understand that there are different ways to interpret data. For instance, consider “A Surprising Feature of IQ Has Actually Improved over the Past 30 Years.” That write up asserts:

Researchers from the University of Vienna in Austria dug deep into the data from 287 previously studied samples, covering a total of 21,291 people from 32 countries aged between 7 and 72, across a period of 31 years (1990 to 2021).

Each individual had completed the universally recognized d2 Test of Attention for measuring concentration, which when taken as a whole, showed a moderate rise in concentration levels over the decades, suggesting adults are generally better able to focus compared with people more than 30 years ago.

I have observed this uplifting benefit of social media, screen time, and swiping. A recent example is that a clerk at our local organic food market was intent on watching a video on his mobile phone. Several people were talking softly as they waited for the young person to put down his phone and bag the groceries. One intrusive and bold person spoke up and said, “Young man, would you put down your phone and put the groceries in the sack?” The young cognitively improved individual ignored her. I then lumbered forward like a good dinobaby and said, “Excuse me, I think you need to do your job.” When he became aware of my standing directly in front of him, he put down his phone. What concentration!

Are social media and their trappings good or bad? Wait, I need to check my phone.

Stephen E Arnold, November 20, 2023

Sigh, More Doom and Gloom about Smart Software

November 20, 2023

This essay is the work of a dumb humanoid. No smart software required.

Hey, the automatic popcorn function works in your microwave, right? That’s a form of smart software. But that’s okay. The embedding of AI in a personnel review is less benign. Letting AI develop a bio-weapon system is definitely bad. Sci fi or real life?


An AI researcher explains to her colleagues that smart software will destroy their careers, ruin their children’s lives, and destroy the known universe. The colleagues are terrified except for those consulting for firms engaged in the development and productization of AI products and services. Thanks, Microsoft Bing. You have the hair of a scientist figured out.

I read “AI Should Be Better Understood and Managed — New Research Warns.” The main idea, according to the handy dandy summary (which may have been generated by an AI system), is this “warning”:

Artificial Intelligence (AI) and algorithms can and are being used to radicalize, polarize, and spread racism and political instability, says an academic. An expert argues that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but can be contributors to polarization, radicalism and political violence — posing a threat to national security.

Who knew? I wonder if the developers in China, Iran, North Korea, and any other members of the “axis of evil” are paying attention to the threats of smart software, developing with and using the technologies, and following along with the warnings of Western Europeans? My hunch is that the answer is, “Are you kidding?”

I noted this statement in the paper:

“This lack of trust in machines, the fears associated with them, and their association with biological, nuclear and genetic threats to humankind has contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and (in some cases) to harness its positive potentiality,” writes Professor Burton.

I assume the author includes “all” of earth’s governments. Now that strikes me as a somewhat challenging task. Exactly what coordinating group will undertake the job? A group of academics in the EU? Some whiz kids at Google or OpenAI? How about the US Congress?

Yeah.

Gentle reader, the AI cat is out of the bag, and I am growing less responsive to the fear mongering.

Stephen E Arnold, November 20, 2023

Why Suck Up Health Care Data? Maybe for Cyber Fraud?

November 20, 2023

This essay is the work of a dumb humanoid. No smart software required.

In the US, medical care is an adventure. Last year, my “wellness” check up required a visit to another specialist. I showed up at the appointed place on the day and time my printed form stipulated. I stood in line for 10 minutes as two “intake” professionals struggled to match those seeking examinations with the information available to the check-in desk staff. The intake professional called my name and said, “You are not a female.” I said, “That is correct.” The intake professional replied, “We have the medical records from your primary care physician for a female named Tina.” Nice Health Insurance Portability and Accountability Act compliance, right?


A moose in Maine learns that its veterinary data have been compromised by bad actors, probably from a country in which the principal language is not moose grunts. With those data, the shocked moose can be located using geographic data in his health record. Plus, the moose’s credit card data is now on the loose. If the moose in Maine is scared, what about the humanoids with the fascinating nasal phonemes?

That same health care outfit reported that it was compromised and was a victim of a hacker. The health care outfit floundered around and now, months later, struggles to update prescriptions and keep appointments straight. How’s that for security? In my book, that’s about par for health care managers who [a] know zero about confidentiality requirements and [b] even less about system security. Horrified? You can read more about this one-horse travesty in “Norton Healthcare Cyber Attack Highlights Record Year for Data Breaches Nationwide.” I wonder if the grandparents of the Norton operation were participants on Major Bowes’ Amateur Hour radio show?

Norton Healthcare was a poster child for the Commonwealth of Kentucky. But the great state of Maine (yep, the one with moose, lovable black flies, and citizens who push New York real estate agents’ vehicles into bays) managed to lose the personal data for 2,192,515 people. You can read about that “minor” security glitch in the Office of the Maine Attorney General’s Data Breach Notification.

What possible use is health care data? Let me identify a handful of bad actor scenarios enabled by inept security practices. Note, please, that these are worse than being labeled a girl or failing to protect the personal information of what could be most of the humans and probably some of the moose in Maine.

  1. Identity theft. Those newborns and entries identified as deceased can be converted into personas for a range of applications, like applying for Social Security numbers, passports, or government benefits.
  2. Access to bank accounts. With a complete array of information, a bad actor can engage in a number of maneuvers designed to withdraw or transfer funds.
  3. Bundle up the biological data and sell it via one of the private Telegram channels focused on such useful information. Bioweapon researchers could find some of the data fascinating.

Why am I focusing on health care data? Here are the reasons:

  1. Enforcement of existing security guidelines seems to be lax. Perhaps it is time to conduct audits and penalize those outfits which find security easy to talk about but difficult to do?
  2. Should one or more Inspector Generals’ offices conduct some data collection into the practices of state and Federal health care security professionals, their competencies, and their on-the-job performance? Some humans and probably a moose or two in Maine might find this idea timely.
  3. Should the vendors of health care security systems demonstrate to one of the numerous Federal cyber watch dog groups the efficacy of their systems and then allow one or more of the Federal agencies to probe those systems to verify that the systems do, in fact, actually work?

Without meaningful penalties for security failures, it may be easier to post health care data on a Wikipedia page and quit the crazy charade that health information is secure.

Stephen E Arnold, November 20, 2023

A TikTok Titbit

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I am not sure if the data are spot on. Nevertheless, the alleged factoid caught my attention. There might be a germ of truth in the news item. The story is “TikTok Is the Career Coach of Choice for Gen Z. Is That Really a Good Idea?” My answer to the question is, “No.”

The write up reports:

A new survey of workers aged 21 to 40 by ResumeBuilder.com found that half of Gen Zers and millennials are getting career advice off the app. Two in three users surveyed say they’re very trusting or somewhat trusting of the advice they receive. The recent survey underscores how TikTok is increasingly dominating internet services of all kinds.

To make its point the write up includes this statement:

… Another study found that 51% of Gen Z women prefer TikTok over Google for search. It’s just as popular for news and entertainment: One in six American teens watch TikTok “almost constantly,” according to a 2022 Pew Research Center survey. “We’re talking about a platform that’s shaping how a whole generation is learning to perceive the world,” Abbie Richards, a TikTok researcher, recently told the Washington Post.

Accurate? Probably close enough for horseshoes.

Stephen E Arnold, November 20, 2023

OpenAI: Permanent CEO Needed

November 17, 2023

This essay is the work of a dumb dinobaby. No smart software required.

My rather lame newsreader spit out an “urgent alert” for me. Like the old teletype terminal: Ding, ding, ding, and a bunch of asterisks.

Surprise. Sam AI-Man allegedly has been given the opportunity to find his future elsewhere. Let me translate blue chip consultant speak for you. The “find your future elsewhere” phrase means you have been fired, RIFed, terminated with extreme prejudice, or “there’s the door. Use it now.” The particular connotative spin depends on the person issuing the formal statement.


“Keep in mind that we will call you,” says the senior member of the Board of Directors. The head of the human resources committee says, “Remember. We don’t provide a reference. Why not try the Google AI system?” Thank you, MSFT Copilot. You must have been trained on content about Mr. Ballmer’s departure.

“OpenAI Fires Co-Founder and CEO Sam Altman for Lying to Company Board” states as rock solid basaltic truth:

OpenAI CEO and co-founder Sam Altman was fired for lying to the board of his company.

The good news is that a succession option, of sorts, is in place. Accordingly, OpenAI’s chief technical officer has become the “interim CEO.” I like the “interim.” That’s solid.

For the moment, let’s assume the RIF statement is true. Furthermore, on this rainy Saturday in rural Kentucky, I shall speculate about the reasons for this announcement. Here we go:

  1. The problem is money, the lack thereof, or the impossibility of controlling the costs of the OpenAI system. Perhaps Sam AI-Man said, “Money is no problem.” The Board did not agree. Money is the problem.
  2. The lovey dovey relationship with the Microsofties has hit a rough patch. MSFT’s noises have been faint and now may become louder about AI chips, options, and innovations. Will these Microsoft bleats become more shrill as the ageing giant feels pain as it tries to make marketing hyperbole a reality? Let’s ask the Copilot, shall we?
  3. The Board has realized that the hyperbole has exceeded OpenAI’s technical ability to solve such problems as made up data (hallucinations), the resources to cope with the looming legal storm clouds related to unlicensed use of some content (the Copyright Shield “promise”), fixing up the baked-in bias of the system, and / or OpenAI ChatGPT’s vulnerability to nifty prompt engineering to override alleged “guardrails”.

What’s next?

My answer is, “Uncertainty.” Cue the Ray Charles hit with the lyric “Hit the road, Jack. Don’t you come back no more, no more, no more, no more.” (I did not steal this song; I found it via Google on the Google YouTube. Honest.) I admit I did hear the tune playing in my head when I read the Guardian story.

Stephen E Arnold, November 17, 2023

The Power of Regulation: Muscles MSFT Meets a Strict School Marm

November 17, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “The EU Will Finally Free Windows Users from Bing.” The EU? That collection of fractious states which wrangle about irrelevant subjects; to wit, the antics of America’s techno-feudalists. Yep, that EU.

The “real news” write up reports:

Microsoft will soon let Windows 11 users in the European Economic Area (EEA) disable its Bing web search, remove Microsoft Edge, and even add custom web search providers — including Google if it’s willing to build one — into its Windows Search interface. All of these Windows 11 changes are part of key tweaks that Microsoft has to make to its operating system to comply with the European Commission’s Digital Markets Act, which comes into effect in March 2024

The article points out that the DMA includes a “slew” of other requirements. Please, do not confuse “slew” with “stew.” These are two different things.


The old fashioned high school teacher says to the high school super star, “I don’t care if you are an All-State football player, you will do exactly as I say. Do you understand?” The outsized scholar-athlete scowls and says, “Yes, Mrs. Ee-You. I will comply.” Thank you, MSFT Copilot. You converted the large company into an image I had of its business practices with aplomb.

Will Microsoft remove Bing — sorry, Copilot — from its software and services offered in the EU? My immediate reaction is that the Redmond crowd will find a way to make the magical software available. For example, will such options as legalese and a check box, a new name, a for-fee service with explicit disclaimers and permissions, and probably more GenZ ideas foreign to me do the job?

The techno weight lifter should not be underestimated. Those muscles were developed moving bundles of money, not dumb “belles.”

Stephen E Arnold, November 17, 2023

Smart Software for Cyber Security Mavens (Good and Bad Mavens)

November 17, 2023

This essay is the work of a dumb humanoid. No smart software required.

One member of my research team (who wishes to maintain a low profile) called my attention to “Awesome GPTs (Agents) for Cybersecurity.” The list on GitHub says:

The "Awesome GPTs (Agents) Repo" represents an initial effort to compile a comprehensive list of GPT agents focused on cybersecurity (offensive and defensive), created by the community. Please note, this repository is a community-driven project and may not list all existing GPT agents in cybersecurity. Contributions are welcome – feel free to add your own creations!


Open source cyber security tools and smart software can be used by good actors to make people safe. The tools can be used by less good actors to create some interesting situations for cyber security professionals, the elderly, and clueless organizations. Thanks, Microsoft Bing. Does MSFT use these tools to keep people safe or unsafe?

When I viewed the list, it contained more than 30 items. Let me highlight three, and invite you to check out the others at the link to the repository:

  1. The Threat Intel Bot. This is a specialized GPT for advanced persistent threat intelligence
  2. The Message Header Analyzer. This dissects email headers for “insights.” (A short sketch of this sort of analysis appears after this list.)
  3. Hacker Art. The software generates hacker art and nifty profile pictures.
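
To make the second item concrete, here is a minimal Python sketch of the kind of check a “message header analyzer” performs. This is not the GPT agent from the repository; the sample message, the helper name summarize_headers, and the From / Reply-To mismatch heuristic are illustrative assumptions on my part.

```python
# A hypothetical, simplified header triage routine -- not the repository's agent.
from email import message_from_string
from email.policy import default

# Invented sample message for illustration only.
RAW_MESSAGE = """\
Received: from mail.example.com (203.0.113.7) by mx.example.org
From: "IT Support" <support@examp1e.com>
Reply-To: helpdesk@example.net
Subject: Password reset required
To: someone@example.org

Click the link to keep your account active.
"""

def summarize_headers(raw: str) -> dict:
    """Pull out the header fields most often checked during phishing triage."""
    msg = message_from_string(raw, policy=default)
    from_addr = msg["From"] or ""
    reply_to = msg["Reply-To"]
    return {
        "from": str(from_addr),
        "reply_to": str(reply_to) if reply_to else None,
        "subject": msg["Subject"],
        "received_chain": msg.get_all("Received", []),
        # A From / Reply-To mismatch is a common (not conclusive) red flag.
        "from_replyto_mismatch": bool(reply_to) and str(reply_to) not in str(from_addr),
    }

if __name__ == "__main__":
    for field, value in summarize_headers(RAW_MESSAGE).items():
        print(f"{field}: {value}")
```

Whether the GPT agents on the list do anything more elaborate than this, I cannot say; the point is that the building blocks are ordinary and widely available to good and bad actors alike.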

Several observations:

  • More tools and services will be forthcoming; thus, the list will grow.
  • Bad actors and good actors will find software to help them accomplish their objectives.
  • A for-fee bundle of these will be assembled and offered for sale, probably on eBay or Etsy. (Too bad, fr0gger.)

Useful list!

Stephen E Arnold, November 17, 2023

Google: Rock Solid Arguments or Fanciful Confections?

November 17, 2023

This essay is the work of a dumb humanoid. No smart software required.

I read some “real” news from a “real” newspaper. My belief is that a “real journalist,” an editor, and probably some supervisory body reviewed the write up. Therefore, by golly, the article is objective, clear, and actual factual. What does “What Google Argued to Defend Itself in Landmark Antitrust Trial” say?


“I say that my worthy opponent’s assertions are — ahem, harrumph — totally incorrect. I do, I say, I do offer that comment with the greatest respect. My competitors are intellectual giants compared to the regulators who struggle to use Google Maps on an iPhone,” opines a legal eagle who supports Google. Thanks, Microsoft Bing. You have the “chubby attorney” concept firmly in your digital grasp.

First, the write up says zero about the secrecy in which the case is wrapped. Second, it does not offer any comment about the amount the Google paid to be the default search engine other than offering the allegedly consumer-sensitive, routine, and completely logical fees Google paid. Hey, buying traffic is important, particularly for outfits accused of operating in a way that requires a US government action. Third, the support structure for the Google arguments is not evident. I could not discern the logical thread that linked the components presented in such lucid prose.

The pillars of the logical structure are:

  1. Appropriate payments for traffic; that is, the Google became the default search engine. Do users change defaults? Well, sure they do. If true, then why be the default in the first place? What are the choices? A Russian search engine, a Chinese search engine, a shadow of Google (Bing, I think), or a metasearch engine (little or no original indexing, just Vivisimo-inspired mash up results)? But pay the “appropriate” amount Google did.
  2. Google is not the only game in town. Nice terse statement of questionable accuracy. That’s my opinion which I articulated in the three monographs I wrote about Google.
  3. Google fosters competition. Okay, it sure does. Look at the many choices one has: Swisscows.com, Qwant.com, and the estimable Mojeek, among others.
  4. Google spends lots of money on helping people research to make “its product great.”
  5. Google’s innovations have helped people around the world?
  6. Google’s actions have been anticompetitive, but not too anticompetitive.

Well, I believe each of these assertions. Would a high school debater buy into the arguments? I know for a fact that my debate partner and I would not.

Stephen E Arnold, November 17, 2023
