Amazon Customer Service: Let Many Flowers Bloom and Die on the Vine

November 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Amazon has been outputting artificial intelligence “assertions” at a furious pace. What’s clear is that Amazon is “into” the volume and variety business, in my opinion. The logic of offering multiple “works in progress” and getting them to work reasonably well is going to have three characteristics: The first is that deploying and operating different smart software systems is going to be expensive. The second is that tuning and maintaining high levels of accuracy in the outputs will be expensive. The third is that supporting the users, partners, customers, and integrators is going to be expensive. If we use a bit of freshman high school algebra, the common factor is expensive. Amazon’s remarkable assertion that no one wants to bet a business on just one model strikes me as a bit out of step with the world in which bean counters scuttle and scurry in green eyeshades and sleeve protectors. (See? I am a dinobaby. Sleeve protectors. I bet none of the OpenAI-type outfits have accountants who use these fashion accessories!)
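For the avoidance of doubt, here is that freshman algebra as a worked line, with symbols I invented for this post (E for “expensive”; d, t, and s for the deployment, tuning, and support burdens):

$$C_{\text{total}} = E\,d + E\,t + E\,s = E\,(d + t + s)$$

Factor out E, and the common factor is, indeed, expensive.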

Let’s focus on just one facet of the expensive burdens I touched upon above: customer service. Navigate to the remarkable and stunningly uncritical write up called “How to Reach Amazon Customer Service: A Complete Guide.” The write up is an earthworm list of the “options” Amazon provides. As Amazon was announcing its new new big big things, I was trying to figure out why an order for an $18 product was rejected. The item in question was one part of a multipart order. The other, more costly items were approved and billed to my Amazon credit card.


Thanks, MSFT Copilot. You do a nice broken bulldozer, or at least a good enough one.

But the dog treats?

I systematically worked through the Amazon customer service options. As a Prime customer, I assumed one of them would work. Here’s my report card:

  • Amazon’s automated help. A loop. See Help pages, which suggested I navigate to the customer service page. Cute. A first-year comp sci student’s programming error: a loop right out of the box. Nifty. (See the sketch after this list.)
  • The customer service page. Well, that page sent me to Help, and Help sent me to the automation loop. Cool. Zero for two.
  • Access through the Amazon app. Nope. I don’t install “apps” on my computing devices unless I have zero choice. (Yes, I am thinking about Apple and Google.) Too bad, Amazon. I reject your app the way I reject QR codes used by restaurants. (Do these hash slingers know that QR codes are a fave of some bad actors?)
  • Live chat with Amazon customer service was not live. It was a bot. The suggestion? Get back in the loop. Maybe the chat staff was at the Amazon AI announcement, or the chat desk was severely understaffed, or the humans simply did not care. Another loser.
  • Request a call from Amazon customer service. Yeah, I got to that after I called Amazon customer service. Another loser.
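For the curious, the bug pattern looks like this. A minimal sketch in Python; the page names and routing table are my invention, not Amazon’s actual site map:

```python
# Two support pages that each point at the other: the classic
# first-year loop. This routing table is hypothetical.
NEXT_PAGE = {"help": "customer_service", "customer_service": "help"}

def follow_support_links(start: str, max_hops: int = 10) -> str:
    """Follow the 'get help here' links until we land somewhere new
    or detect that we are going in circles."""
    page, visited = start, set()
    for _ in range(max_hops):
        if page in visited:
            return f"Loop detected at '{page}'. Zero for two."
        visited.add(page)
        page = NEXT_PAGE[page]
    return page

print(follow_support_links("help"))  # Loop detected at 'help'. Zero for two.
```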

I repeated the “call Amazon customer service” routine twice, and I finally worked through the automated system and got a person who barely spoke English. I explained the problem: one product was rejected because my Amazon credit card was declined. I learned that this particular customer service expert did not understand how that could have happened. Yeah, great work.

How did I resolve the rejected credit card? I called the Chase Bank customer service number. I told a person my card was manipulated and I suspected fraud. I was escalated to someone who understood the word “fraud.” After about five minutes of “Will you please hold,” the Chase person told me, “The problem is at Amazon, not your card and not Chase.”

What was the fix? Chase said, “Cancel the order.” I did and went to another vendor.

Now what’s that experience suggest about Amazon’s ability (willingness) to provide effective, efficient customer support to users of its purported multiple large language models, AI systems, and assorted marketing baloney output during Amazon’s “we are into AI” week?

My answer? The Bezos bulldozer has an engine belching black smoke, making a lot of noise because the muffler has a hole in it, and the thumpity thump of the engine reveals that something is out of tune.

Yeah, AI and customer support. Just one of the “expensive” things Amazon may not be able to deliver. The troubling thing is that Amazon’s AI may have been powering the multiple customer support systems. Yikes.

Stephen E Arnold, November 29, 2023

India Might Not Buy the User-Is-Responsible Argument

November 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

India’s elected officials seem to be agitated about deep fakes. No, it is not the disclosure that a company in Spain is collecting $10,000 a month or more with a fake influencer named Aitana López. (Some in India may be following the deeply faked bimbo, but I would assert that not too many elected officials will admit to their interest in the digital dream boat.)

US News & World Report recycled a Reuters (the trust outfit) story “India Warns Facebook, YouTube to Enforce Rules to Deter Deepfakes — Sources” and asserted:

India’s government on Friday warned social media firms including Facebook and YouTube to repeatedly remind users that local laws prohibit them from posting deepfakes and content that spreads obscenity or misinformation


“I know you and the rest of the science club are causing problems with our school announcement system. You have to stop it, or I will not recommend you or any science club member for the National Honor Society.” The young wizard says, “I am very, very sorry. Neither I nor my friends will play rock and roll music during the morning announcements. I promise.” Thanks, MidJourney. Not great but at least you produced an image which is more than I can say for the MSFT Copilot Bing thing.

What’s notable is that the government of India is not focusing on the user of deep fake technology. India has US companies in its headlights. The news story continues:

India’s IT ministry said in a press statement all platforms had agreed to align their content guidelines with government rules.

Amazing. The US techno-feudalists are rolling over. I am someone who wonders, “Will these US companies bend a knee to India’s government?” I have zero inside information about either India or the US techno-feudalists, but I have a recollection that US companies:

  1. Do what they want to do and then go to court. If they win, they don’t change. If they lose, they pay the fine and they do some fancy dancing.
  2. Go to a meeting and output vague assurances prefaced by “Thank you for that question.” The companies may do a quick paso doble and continue with business pretty much as usual.
  3. Just comply. As Canada has learned, Facebook’s response to the Canadian news edict was simple: No news for Canada. To make the situation more annoying to a real government, other techno-feudalists hopped on Facebook’s better idea.
  4. Ignore the edict. If summoned to a meeting or hit with a legal notice, companies will respond with flights of legal eagles with some simple messages; for example, no more support for your law enforcement professionals or your intelligence professionals. (This is a hypothetical example only, so don’t develop the shingles, please.)

Net net: Techno-feudalists have to decide: Roll over, ignore, or go to “war.”

Stephen E Arnold, November 29, 2023

Maybe the OpenAI Chaos Ended Up as Grand Slam Marketing?

November 28, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Yep, Q Star. The next Big Thing. “About That OpenAI Breakthrough” explains:

OpenAI could in fact have a breakthrough that fundamentally changes the world. But “breakthroughs” rarely turn out to be general enough to live up to initial rosy expectations. Often advances work in some contexts, not others.

I agree, but I have a slightly different view of the matter. OpenAI’s chaotic management skills ended up as accidental great marketing. During the dust up and dust settlement, where were the other Big Dogs of the techno-feudal world? If you said, “Who?” you are on the same page with me. OpenAI burned itself into the minds of those who sort of care about AI and the end of the world, Terminator style.


In companies and organizations with “do gooder” tendencies, the marketing messages can be interpreted by some as a scientific fact. Nope. Thanks, MSFT Copilot. Are you infringing and expecting me to take the fall?

First, shotgun marriages can work out here in rural Kentucky. But more often than not, these unions become the seeds of Hatfield and McCoy-type Thanksgivings. “Grandpa, don’t shoot the turkey with birdshot. Granny broke a tooth last year.” Translating from Kentucky argot: Ideological divides produce craziness. The OpenAI mini-series is in its first season and there is more to come from the wacky innovators.

Second, any publicity is good publicity in Sillycon Valley. Who has given a thought to Google’s smart software? How did Microsoft’s stock perform during the five-day mini-series? What is the new Board of Directors going to do to manage the bucking broncos of breakthroughs? Talk about dominating the “conversation.” Hats off to the fun crowd at OpenAI. Hey, Google, are you there?

Third, how is that regulation of smart software coming along? I think one unit of the US government is making noises about the biggest large language model ever. The EU folks continue to discuss, a skill essential to representing the interests of the group. Countries like China are chugging along, happily downloading code from open source repositories. So exactly what’s changed?

Net net: The OpenAI drama has been a click champ. Good, bad, or indifferent, other AI outfits have some marketing to do in the wake of the blockbuster “Sam AI-Man: The Next Bigger Thing.” One way or another, Sam AI-Man dominates headlines, right, Zuck? Right, Sundar?

Stephen E Arnold, November 28, 2023

Governments Tip Toe As OpenAI Sprints: A Story of the Turtles and the Rabbits

November 27, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Reuters has reported that a pride of lion-hearted countries have crafted “joint guidelines” for systems with artificial intelligence. I am not exactly sure what “artificial intelligence” means, but I have confidence that a group of countries, officials, advisors, and consultants do.

The main point of the news story “US, Britain, Other Countries Ink Agreement to Make AI Secure by Design” is that someone in these countries knows what “secure by design” means. You may not have noticed that cyber breaches seem to be chugging right along. Maine managed to lose control of most of its residents’ personally identifiable information. I won’t mention issues associated with Progress Software, Microsoft systems, and LY Corp and its messaging app with a mere 400,000 users.


The turtle started but the rabbit reacted. Now which AI enthusiast will win the race down the corridor between supercomputers powering smart software? Thanks, MSFT Copilot. It took several tries, but you delivered a good enough image.

The Reuters story notes with the sincerity of an outfit focused on trust:

The agreement is the latest in a series of initiatives – few of which carry teeth – by governments around the world to shape the development of AI, whose weight is increasingly being felt in industry and society at large.

Yep, “teeth.”

At the same time, Sam AI-Man was moving forward with such mouth-watering initiatives as the AI app store and discussions to create AI-centric hardware. “I Guess We’ll Just Have to Trust This Guy, Huh?” asserts:

But it is clear who won (Altman) and which ideological vision (regular capitalism, instead of some earthy, restrained ideal of ethical capitalism) will carry the day. If Altman’s camp is right, then the makers of ChatGPT will innovate more and more until they’ve brought to light A.I. innovations we haven’t thought of yet.

As the signatories to the agreement without “teeth” and Sam AI-Man were doing their respective “thing,” I noted the AP story titled “Pentagon’s AI Initiatives Accelerate Hard Decisions on Lethal Autonomous Weapons.” That write up reported:

… the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China.

To deal with the AI challenge, the AP story includes this paragraph:

The Pentagon’s portfolio boasts more than 800 AI-related unclassified projects, much still in testing. Typically, machine-learning and neural networks are helping humans gain insights and create efficiencies.

Will the signatories to the “secure by design” agreement act like tortoises or like zippy hares? I know which beastie I would bet on. Will military entities back the slow or the fast AI faction? I know upon which I would wager fifty cents.

Stephen E Arnold, November 27, 2023

Predicting the Weather: Another Stuffed Turkey from Google DeepMind?

November 27, 2023

This essay is the work of a dumb dinobaby. No smart software required.

By accident or design, the adolescents at OpenAI have dominated headlines for the pre-turkey, the turkey, and the post-turkey celebrations. In the midst of this surge in poohbah outputs, Xhitter xheets, and podcast posts, non-OpenAI news has been struggling for a toehold.


An important AI announcement from Google DeepMind stuns a small crowd. Were the attendees interested in predicting the weather or getting a free umbrella? Thanks, MSFT Copilot. Another good enough art work whose alleged copyright violations you want me to determine. How exactly am I to accomplish that? Use Google Bard?

What is another AI company to do?

A partial answer appears in “DeepMind AI Can Beat the Best Weather Forecasts. But There Is a Catch”. This is an article in the esteemed and rarely spoofed Nature Magazine. None of that Techmeme dominating blue link stuff. None of the influential technology reporters asserting, “I called it. I called it.” None of the eye wateringly dorky observations that OpenAI’s organizational structure was a problem. None of the “Satya Nadella learned about the ouster at the same time we did.” Nope. Nope. Nope.

What Nature provided is good, old-fashioned content marketing. The write up points out that DeepMind says that it has once again leapfrogged mere AI mortals. Like the quantum supremacy assertion, the Google can predict the weather. (My great grandmother made the same statement about The Farmer’s Almanac. She believed it. May she rest in peace.)

The estimable magazine, reporting in the midst of the OpenAI news-making turkeyfest, said:

To make a forecast, it uses real meteorological readings, taken from more than a million points around the planet at two given moments in time six hours apart, and predicts the weather six hours ahead. Those predictions can then be used as the inputs for another round, forecasting a further six hours into the future…. They [Googley DeepMind experts] say it beat the ECMWF’s “gold-standard” high-resolution forecast (HRES) by giving more accurate predictions on more than 90 per cent of tested data points. At some altitudes, this accuracy rose as high as 99.7 per cent.
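That passage describes an autoregressive rollout: feed the model’s six-hour output back in as an input, over and over. Here is a minimal sketch of the idea in Python; the model stub, parameter names, and toy dynamics are mine, not DeepMind’s actual GraphCast code:

```python
import numpy as np

def predict_6h(state_prev, state_now, alpha=0.9):
    """Stand-in for the learned model: maps two atmospheric snapshots
    taken six hours apart to the state six hours ahead. The real system
    runs a trained neural network here; this is toy extrapolation."""
    return state_now + alpha * (state_now - state_prev)

def rollout(state_prev, state_now, steps):
    """Autoregressive forecasting as the article describes: each
    six-hour prediction becomes an input to the next round."""
    forecasts = []
    for _ in range(steps):                 # steps x 6 hours ahead
        state_next = predict_6h(state_prev, state_now)
        forecasts.append(state_next)
        state_prev, state_now = state_now, state_next  # feed output back in
    return forecasts

# "More than a million points around the planet," faked with random data.
readings = np.random.rand(2, 1_000_000)
ten_day = rollout(readings[0], readings[1], steps=40)  # 40 x 6h = 10 days
```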

No more ruined picnics. No weddings with bridesmaids’ shoes covered in mud. No more visibly weeping mothers because everyone is wet.

But Nature, to the disappointment of some PR professionals, presents an alternative viewpoint. What a bummer after all those meetings and presentations:

“You can have the best forecast model in the world, but if the public don’t trust you, and don’t act, then what’s the point?” [A statement attributed to Ian Renfrew at the University of East Anglia]

Several thoughts are in order:

  1. Didn’t IBM make a big deal about its super duper weather capabilities? It bought the Weather Channel too. But when the weather and customers got soaked, I think IBM folded its umbrella. Will Google have to emulate IBM’s behavior? I mean “the weather.” (Note: The owner of the IBM Weather Company is an outfit once alleged to have owned or been involved with the NSO Group.)
  2. Google appears to have convinced Nature to announce the quantum supremacy type breakthrough only to find that a professor from someplace called East Anglia did not purchase the rubber boots from the Google online store.
  3. The current edition of The Old Farmer’s Almanac is about US$9.00 on Amazon. That predictive marvel was endorsed by Gussie Arnold, born about 1835. We are not sure because my father’s records of the Arnold family were soaked by a sudden thunderstorm.

Just keep in mind that Google’s system can predict the weather 10 days ahead. Another quantum PR moment from the Google which was drowned out in the OpenAI tsunami.

Stephen E Arnold, November 27, 2023

Microsoft, the Techno-Lord: Avoid My Galloping Steed, Please

November 27, 2023

This essay is the work of a dumb dinobaby. No smart software required.

The Merriam-Webster.com online site defines “responsibility” this way:

re·spon·si·bil·i·ty

1 : the quality or state of being responsible: such as
: moral, legal, or mental accountability
: RELIABILITY, TRUSTWORTHINESS
: something for which one is responsible

The online sector has a clever spin on responsibility; that is, in my opinion, the companies have none. Google wants people who use its online tools and post content created with those tools to make sure that what the Google system outputs does not violate any applicable rules, regulations, or laws.

image

In a traditional fox hunt, the hunters had the “right” to pursue the animal. If a farmer’s daughter were in the way, it was the farmer’s responsibility to keep the silly girl out of the horse’s path. That will teach them to respect their betters, I assume. Thanks, MSFT Copilot. I know you would not put me in legal jeopardy, would you? Now what are the laws pertaining to copyright for a cartoon in Armenia? Darn, I have to know that, don’t I?

Such a crafty way of defining itself as the mere creator of software machines has inspired Microsoft to follow a similar path. The idea is that anyone using Microsoft products, solutions, and services is “responsible” for complying with applicable rules, regulations, and laws.

Tidy. Logical. Complete. Just like a nifty algebra identity.

“Microsoft Wants YOU to Be Sued for Copyright Infringement, Washes Its Hands of AI Copyright Misuse and Says Users Should Be Liable for Copyright Infringement” explains:

Microsoft believes they have no liability if an AI, like Copilot, is used to infringe on copyrighted material.

The write up includes this passage:

So this all comes down to, according to Microsoft, that it is providing a tool, and it is up to users to use that tool within the law. Microsoft says that it is taking steps to prevent the infringement of copyright by Copilot and its other AI products, however, Microsoft doesn’t believe it should be held legally responsible for the actions of end users.

The write up (with no Jimmy Kimmel spin) includes this statement, allegedly from someone at Microsoft:

Microsoft is willing to work with artists, authors, and other content creators to understand concerns and explore possible solutions. We have adopted and will continue to adopt various tools, policies, and filters designed to mitigate the risk of infringing outputs, often in direct response to the feedback of creators. This impact may be independent of whether copyrighted works were used to train a model, or the outputs are similar to existing works. We are also open to exploring ways to support the creative community to ensure that the arts remain vibrant in the future.

From my drafty office in rural Kentucky, the refusal to accept responsibility for its business actions, its products, its policies to push tools and services on users, and the outputs of its cloudy system is quite clever. Exactly how will a user of products pushed at them, like Edge and its smart features, prevent a smart Microsoft system from delivering something that violates an applicable rule, regulation, or law?

But legal and business cleverness is the norm for the techno-feudalists. Let the serfs deal with the body of the child killed when the barons chase a fox through a small leasehold. I can hear the brave royals saying, “It’s your fault. Your daughter was in the way. No, I don’t care that she was using the free Microsoft training materials to learn how to use our smart software.”

Yep, responsible. The death of the hypothetical child frees up another space in the training course.

Stephen E Arnold, November 27, 2023

Speeding Up and Simplifying Deep Fake Production

November 24, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Remember the good old days when creating a deep fake required having multiple photographs, maybe a video clip, and minutes of audio? Forget those requirements. To whip up a deep fake, one needs only a short audio clip and a single picture of the person.


The pace of innovation in deep fake production is speeding along. Bad actors will find it easier than ever to produce interesting videos for vulnerable grandparents worldwide. Thanks, MidJourney. It was a struggle, but you produced a race scene that is good enough, the modern benchmark for excellence.

Researchers at Nanyang Technological University have blasted through the old-school requirements. The team’s software can generate realistic videos. These can show facial expressions and head movements. The system is called DIRFA, a tasty acronym for Diverse yet Realistic Facial Animations. One notable achievement of the researchers is that the video is produced in 3D.
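As a rough mental model of what such a system does, here is a skeleton pipeline in Python. Every function below is a stub I invented for illustration; DIRFA’s actual components are trained neural networks, and none of these names are its API:

```python
import numpy as np

# Hypothetical skeleton of an audio-driven talking-face pipeline.
# It marks the stages the article describes: one photo plus one
# audio clip in, an animated face video out.

def extract_audio_features(audio, fps=25, sample_rate=16_000):
    """Stub: slice the waveform into one feature vector per video frame."""
    n_frames = max(1, int(len(audio) / sample_rate * fps))
    return np.zeros((n_frames, 128))

def predict_animation(features):
    """Stub for the learned model: audio features -> per-frame 3D
    facial-expression and head-pose parameters."""
    return np.zeros((features.shape[0], 64))

def render_frames(photo, params):
    """Stub renderer: deform the single photo per frame's parameters."""
    return [photo.copy() for _ in params]

def make_talking_face(photo, audio):
    return render_frames(photo, predict_animation(extract_audio_features(audio)))

frames = make_talking_face(np.zeros((256, 256, 3)), np.zeros(32_000))  # 2 s clip
print(len(frames))  # ~50 frames at 25 fps
```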

The report “Realistic Talking Faces Created from Only an Audio Clip and a Person’s Photo” includes more details about the system and links to demonstration videos. If the story is not available, you may be able to see the video on YouTube at this link.

Stephen E Arnold, November 24, 2023

Facial Recognition: A Bit of Bias Perhaps?

November 24, 2023

This essay is the work of a dumb dinobaby. No smart software required.

It’s a running gag in the tech industry that AI algorithms and related advancements are “racist.” Motion sensors can’t recognize dark pigmented skin. Photo recognition software misidentifies black and other ethnicities as primates. AI-trained algorithms are also biased against ethnic minorities and women in the financial, business, and other industries. AI is “racist” because it’s trained on data sets heavy in white and male information.

Ars Technica shares another story about biased AI: “People Think White AI-Generated Faces Are More Real Than Actual Photos, Study Says.” The journal Psychological Science published a peer-reviewed study, “AI Hyperrealism: Why AI Faces Are Perceived As More Real Than Human Ones.” The study discovered that faces created with three-year-old AI technology were judged more real than actual ones. Predominantly, AI-generated faces of white people were perceived as the most realistic.

The study surveyed 124 white adults who were shown a mixture of 100 AI-generated images and 100 real ones. They identified 66% of the AI images as human, while only 51% of the real faces were identified as real. Images of people with darker skin, whether real or AI-generated, were judged real about 51% of the time. The study also discovered that participants who made the most mistakes were also the most confident, a clear indicator of the Dunning-Kruger effect.

The researchers conducted a second study with 610 participants and learned:

“The analysis of participants’ responses suggested that factors like greater proportionality, familiarity, and less memorability led to the mistaken belief that AI faces were human. Basically, the researchers suggest that the attractiveness and "averageness" of AI-generated faces made them seem more real to the study participants, while the large variety of proportions in actual faces seemed unreal.

Interestingly, while humans struggled to differentiate between real and AI-generated faces, the researchers developed a machine-learning system capable of detecting the correct answer 94 percent of the time.”

The study could be swung in the typical “racist” direction, arguing that AI will perpetuate social biases. The answer is simple and worth investing in: create better data sets to train AI algorithms.

Whitney Grace, November 24, 2023

Poli Sci and AI: Smart Software Boosts Bad Actors (No Kidding?)

November 22, 2023

This essay is the work of a dumb humanoid. No smart software required.

Smart software (AI, machine learning, et al) has sparked awareness in some political scientists. Until I read “Can Chatbots Help You Build a Bioweapon?” I thought political scientists were still pondering Frederick William, Elector of Brandenburg’s social policies or Cambodian law in the 11th century. I was incorrect. Modern poli sci influenced wonks are starting to wrestle with the immense potential of smart software for bad actors. I think this dispersal of the cloud of unknowing I perceived among similar academic groups when I entered a third-rate university in 1962 is a step forward. Ah, progress!


“Did you hear that the Senate Committee used my testimony about artificial intelligence in their draft regulations for chatbot rules and regulations?” says the recently admitted elected official. The inmates at the prison facility laugh at the incongruity of the situation. Thanks, Microsoft Bing, you do understand the ways of white collar influence peddling, don’t you?

The write up points out:

As policymakers consider the United States’ broader biosecurity and biotechnology goals, it will be important to understand that scientific knowledge is already readily accessible with or without a chatbot.

The statement is indeed accurate. Outside the esteemed halls of foreign policy power, STM (scientific, technical, and medical) information is abundant. Some of the data are online and reasonably easy to find with such advanced tools as Yandex.com (a Russian centric Web search system) or the more useful Chemical Abstracts data.

The write up’s revelations continue:

Consider the fact that high school biology students, congressional staffers, and middle-school summer campers already have hands-on experience genetically engineering bacteria. A budding scientist can use the internet to find all-encompassing resources.

Yes, more intellectual sunlight in the poli sci journal of record!

Let me offer one more example of ground breaking insight:

In other words, a chatbot that lowers the information barrier should be seen as more like helping a user step over a curb than helping one scale an otherwise unsurmountable wall. Even so, it’s reasonable to worry that this extra help might make the difference for some malicious actors. What’s more, the simple perception that a chatbot can act as a biological assistant may be enough to attract and engage new actors, regardless of how widespread the information was to begin with.

Is there a step government deciders should take? Of course. It is the step that US high technology companies have been begging bureaucrats to take. Government should spell out rules for a morphing, little understood, and essentially uncontrollable suite of systems and methods.

There is nothing like regulating the present and future. Poli sci professionals believe it is possible to repaint the weird red tail on the Boeing F 7A aircraft while the jet is flying around. Trivial?

Here’s the recommendation which I found interesting:

Overemphasizing information security at the expense of innovation and economic advancement could have the unforeseen harmful side effect of derailing those efforts and their widespread benefits. Future biosecurity policy should balance the need for broad dissemination of science with guardrails against misuse, recognizing that people can gain scientific knowledge from high school classes and YouTube—not just from ChatGPT.

My take on this modest proposal is:

  1. Guard rails allow companies to pursue legal remedies as those companies do exactly what they want and when they want. Isn’t that why the Google “public” trial underway is essentially “secret”?
  2. Bad actors love open source tools. Unencumbered by bureaucracies, these folks can move quickly. In effect, the mice are equipped with jet packs.
  3. Job matching services allow a bad actor in Greece or Hong Kong to identify and hire contract workers who may have highly specialized AI skills obtained doing their day jobs. The idea is that for a bargain price expertise is available to help smart software produce some AI infused surprises.
  4. Recycling the party line of a handful of high profile AI companies is what makes policy.

With poli sci professionals becoming aware of smart software, a better world will result. Why fret about livestock ownership in the glory days of what is now Cambodia? The AI stuff is here and now, waiting for the policy guidance which is sure to come, even though the draft guidelines have been crafted by US AI companies.

Stephen E Arnold, November 22, 2023

Turmoil in AI Land: Uncertainty R Us

November 21, 2023

This essay is the work of a dumb dinobaby. No smart software required.

It is now Tuesday, November 21, 2023. I learned this morning on the “Pivot” podcast that one of the co-hosts is the “best technology reporter.” I read a number of opinions about the high school science club approach to managing a multi-billion-dollar outfit, allegedly valued at lots of money last Friday, November 17, 2023, and today valued at much less money. I read some of the numerous “real news” stories on Hacker News, Techmeme, and Xitter, and learned:

  1. Gee, it was a mistake
  2. Sam AI-Man is working at Microsoft
  3. Sam AI-Man is not working at Microsoft
  4. Microsoft is ecstatic that opportunities are available
  5. Ilya Sutskever will become a blue-chip consultant specializing in Board-level governance
  6. OpenAI is open because it is business as usual in Sillycon Valley.


The AI ringmaster has issued an instruction or prompt to the smart software. The smart software does not obey. What’s happening is that inputs are not converted to the desired actions, and the entire circus audience is not sure which is more entertaining: the software or the manager. Thanks, Microsoft Copilot. I gave up and used one of the good enough images.

“Firing Sam Altman Hasn’t Worked Out for OpenAI’s Board” reports:

Whether Altman ultimately stays at Microsoft or comes back to OpenAI, he’ll be more powerful than he was last week. And if he wants to rapidly develop and commercialize powerful AI models, nobody will be in a position to stop him. Remarkably, one of the 500 employees who signed Monday’s OpenAI employee letter is Ilya Sutskever, who has had a profound change of heart since he voted to oust Altman on Friday.

Okay, maybe Ilya Sutskever will not become a blue chip consultant. That’s okay, just mercurial.

Several observations:

  1. Smart software causes bright people to behave in sophomoric ways. I have argued for many years that many of the techno-feudalistic outfits are more like high school science clubs than run-of-the-mill high school sophomores. Intelligence coupled with a poorly developed judgment module causes some spectacular management actions.
  2. Poor Google must be uncomfortable as it struggles on the tenterhooks which have snagged its corporate body. Is Microsoft going to be the Big Dog in smart software? Is Sam AI-Man going to do something new to make life for Googzilla more uncomfortable than it already is? Is Google now faced with a crisis about which its flocks of legal eagles, its massive content marketing machine, and its tools for shaping content cannot do much to seize the narrative?
  3. Developers who have embraced the idea of OpenAI as the best partner in the world have to consider that their efforts may be for naught. Where do these wizards turn? To Microsoft and the Softie ethos? To the Zuck and his approach? To Google and its reputation for terminating services like snipers? To the French outfit with offices near some very good restaurants? (That doesn’t sound half bad, does it?)

I am not sure if Act I has ended or if the entire play has ended. After a short intermission, there will be more of something.

Stephen E Arnold, November 21, 2023
