AI and Ethical Concerns: Sure, When “Ethics” Means Money

June 11, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

It seems workers continue to flee OpenAI over ethical concerns. The Byte reports, “Another OpenAI Researcher Quits, Issuing Cryptic Warning.” Understandably unwilling to disclose details, policy researcher Gretchen Krueger announced her resignation on X. She did express a few of her concerns in broad strokes:

“We need to do more to improve foundational things, like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.”

Krueger emphasized that these important issues not only affect communities now but also influence who controls the direction of pervasive AI systems in the future. Right now, that control is in the hands of the tech bros running AI firms. Writer Maggie Harrison Dupré notes Krueger’s departure comes as OpenAI is dealing with a couple of scandals. Other high-profile resignations have also occurred in recent months. We are reminded:

“[Recent] departures include that of Ilya Sutskever, who served as OpenAI’s chief scientist, and Jan Leike, a top researcher on the company’s now-dismantled ’Superalignment’ safety team — which, in short, was the division effectively in charge of ensuring that a still-theoretical human-level AI wouldn’t go rogue and kill us all. Or something like that. Sutskever was also a leader within the Superalignment division. And to that end, it feels very notable that all three of these now-ex-OpenAI workers were those who worked on safety and policy initiatives. It’s almost as if, for some reason, they felt as though they were unable to successfully do their job in ensuring the safety and security of OpenAI’s products — part of which, of course, would reasonably include creating pathways for holding leadership accountable for their choices.”

Yes, most of us would find that reasonable. For members of that leadership, though, it seems escaping accountability is a top priority.

Cynthia Murrell, June 11, 2024

AI May Not Be Magic: The Salesforce Signal

June 10, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Salesforce has been a steady outfit. However, the company suffered a revenue miss, its first in about a quarter century. The news reports cited broad economic factors like “macro headwinds.” According to the firm’s chief people officer, the company has been experimenting with AI for “over a decade.” But the magic of AI was not able to ameliorate the company’s dip or add some chrome trim to its revenue guidance.


John Milton’s god character from Paradise Lost watches the antics of super-sophisticated artificial intelligence algorithms. This character quickly realizes that zeros and ones crafted by humans and enhanced by smart machines are definitely not an omniscient, omnipresent, and omnipotent character who knows everything before it happens, no matter what the PR firms or company spokespersons assert. Thanks, MSFT Copilot. Good enough.

Piecing together fragments of information, it appears that AI has added to the company’s administrative friction. Consider these administrative process examples from a Fortune interview, recycled for MSN.com:

  • The company has deployed 50 AI tools.
  • Salesforce has an AI governance council.
  • There is an Office of Ethical and Humane Use, started in 2019.
  • Salesforce uses surveys to supplement its “robust listening strategies.”
  • There are phone calls and meetings.

Some specific uses of AI appear to address inherent design constraints in Salesforce software; for example, AI has:

saved employees 50,000 hours within one business quarter, and the bot answered nearly 370,000 employee questions, according to the company. Merging into Project Basecamp, the company’s project management platform, has resolved 88,000 worker requests, speeding up issue resolution from an average of 48 hours to just 30 minutes.

What’s the payoff to the bottom line? That information is scant. What we know is that Salesforce may not be benefiting from the additional AI investment or from the friction AI’s bureaucratic processes impose on the company.

What’s this mean for those who predict that AI will change everything? I continue to think about the two ends of the spectrum: the “go fast and break things” crowd and the “stop AI” contingent.

First, the type of AI that does high school essay writing is easy to understand. These systems work as long as the subject matter clumps into piles of factoids which limit the craziness of the algorithms’ outputs. The topic “How to make a taco” is nailed down. The topic “How to decrypt Telegram’s encryption system” is not. Big brains can explain why the taco question is relatively hallucination free but not why the Telegram question generates useless drivel. I have, therefore, concluded, “Limited, narrow domain questions are okay for AI.”

Second, the current systems are presented as super wonderful. An example is the steady flow of PR about Google DeepMind’s contributions to biological science. Yet Google’s search system generates baloney. I think the difference is that whacking away at proteins is a repetitive combinatorial problem. Calling the methods AI is like calling Daylight Chemical Information Systems a manifestation of the Oracle at Delphi: hogwash. PR erases important differences in critical lines of research. Does Google DeepMind feel shame? Let’s ask IBM Watson. That will be helpful. PR has a role; it is not AI.

Third, the desire for a silver bullet is deep-seated in many Peter Principle managers. These “leaders” of “leadership teams” don’t know what to do. Managing becomes figuring out risks. AI has legs, so let’s give that pony a chance to win the cart race. But pony cart races are trivial. The real races require winning three competitions. Few horses pull off that trick. I watch in wonder the launch, retreat, PR explanation, and next launch of some AI outfits. The focus seems to be on getting $20 per month, degrading the service, asking for more money, and then repeating.

The lack of AI innovation is becoming obvious. From the starter’s gun cracking in time with Microsoft’s AI announcement in January 2023, how much progress has been made?

We have the Salesforce financial report. We have the management craziness at OpenAI. We have Microsoft investing in or partnering with a number of technology outfits, including one in Paris. We have Google just doddering and fumbling. We have lawsuits. We have craziness like Adobe’s “owning” any image created with its software. We have start ups which bandy about the term “AI” like a shuttlecock in a high school badminton league in India. We have so many LinkedIn AI experts, I marvel that no one pins these baloney artists to a piece of white bread. We have the Dutch police emphasizing home-grown AI which helped make sense of the ANOM phone stings when the procedures are part of most policeware systems. Statistics, yes. AI, no. Clustering, yes. AI, no. Metadata assignment, yes. AI, no. The ANOM operation ran from about 2017 to its shutdown four years later. AI? Nope.
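For what it is worth, the “clustering, yes; AI, no” point is easy to demonstrate: grouping messages by simple token overlap needs no machine learning at all. Here is a toy sketch; the messages and the similarity threshold are invented for illustration:

```python
# Group short messages by token overlap. Plain set arithmetic, no AI.
def tokens(msg: str) -> set:
    return set(msg.lower().split())

def jaccard(a: set, b: set) -> float:
    # Jaccard similarity: shared tokens over total distinct tokens.
    return len(a & b) / len(a | b)

def cluster(messages: list, threshold: float = 0.3) -> list:
    clusters = []  # each cluster is a list of message indices
    for i, msg in enumerate(messages):
        placed = False
        for c in clusters:
            # Compare against the cluster's first member (single-link style).
            if jaccard(tokens(messages[c[0]]), tokens(msg)) >= threshold:
                c.append(i)
                placed = True
                break
        if not placed:
            clusters.append([i])
    return clusters

msgs = [
    "meet at the dock at midnight",
    "meet at the dock at noon",
    "send the invoice to the office",
]
print(cluster(msgs))
```

The first two messages land in one group and the third stands alone. Swap in smarter similarity measures if you like; it is still statistics, not magic.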

What does the lack of financial payoff and revenue generating AI solutions tell me? My answer to this question is:

  1. The costs of just using and letting prospects use an AI system are high. Due to the lack of a Triple Crown contender, no company has the horse or can afford the costs of getting the nag ready to race and keeping the animal from keeling over dead.
  2. The tangible results are tough to express. Despite the talk about reducing the costs of customer service, any savings net of the cost of the AI system and the need to have humans ride herd on what the crazed cattle-like algorithms yield are not evident to me. The Salesforce experience is that AI cannot fix the Slack system or make it generate oodles of cost savings or revenues from new, happy customers.
  3. The AI systems, particularly the services promoted via Product Hunt, are impossible for me to differentiate. Some do images, but the functions are similar. Some AI systems do text things. Okay. But what’s new? Money is being spent to produce endless variations and me-too services. Fun for some. But boring and a waste of time to a dinobaby like me.

Net net: With economic problems growing in numerous sectors, those with money or a belief that garlic will kill Count Vampire, Baron of Revenue Loss are in for a surprise. Sorry. No software equivalent to Milton’s eternal, all-knowing, omnipotent God. I won’t tell the PR people. That Salesforce signal is meaningful.

Stephen E Arnold, June 10, 2024

Google and Microsoft: The Twinning Is Evident

June 10, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Google and Microsoft have some interesting similarities. Both companies wish they could emulate one another’s most successful products. Microsoft wants search and advertising revenue. Google wants a chokehold on the corporate market for software and services. The senior executives have similar high school academic training. Both companies have oodles of legal processes with more on the horizon. Both companies are terminating employees with extreme prejudice. Both companies seem to have some trust issues. You get the idea.


Some neural malfunctions occur when one gets too big and enjoys the finer things in life, like not working on management tasks with diligence. Thanks, MSFT Copilot. Good enough.

Google and Microsoft are essentially morphing into mirrors of one another. Is that a positive? From an MBA / bean counter point of view, absolutely. There are some disadvantages, but they are minor ones; for example, interesting quasi-monopoly pricing options, sucking the air from the room for certain types of start ups, and having the power of a couple of nation-states. What could go wrong? (Just check out everyday life. Clues are abundant.)

How about management methods which do not work very well? I want to cite two examples.

Google is scaling back its AI search plans after the summary feature told people to eat glue. How do I, recently dubbed “scary grandpa cyber” by an officer at the TechnoSecurity & Digital Forensics Conference in Wilmington, North Carolina, last week, know this? The answer is that I read “Google Is Scaling Back Its AI Search Plans after the Summary Feature Told People to Eat Glue.” This is a good example of the minimum viable product not being minimal enough and certainly not viable. The write up says:

Reid [a Google wizard] wrote that the company already had systems in place to not show AI-generated news or health-related results. She said harmful results that encouraged people to smoke while pregnant or leave their dogs in cars were “faked screenshots.” The list of changes is the latest example of the Big Tech giant launching an AI product and circling back with restrictions after things get messy.

What a remarkable tactic: blame the “users” and reduce the exposure of the online ad giant’s technological prowess. I think these two tactics illustrate the growing gulf between “leadership” and the poorly managed lower level geniuses who toil at Googzilla’s side.

I noted a weird parallel with Microsoft illustrating a similar disconnect between Microsoft’s carpetland dwellers and those working in the weird disconnected buildings on the Campus. This disaster of a minimum viable product or MVP was rolled out with much fanfare at one of Microsoft’s many hard-to-differentiate conferences. The idea was one I heard about decades ago. The individual with whom I associate the idea once worked at Bellcore (one of the spin offs of Bell Labs after Judge Green created the telecommunications wonderland we enjoy today). The idea is a surveillance dream come true, at least for law enforcement and intelligence professionals. MSFT software captures images of a user’s screen, converts the bitmap to text, and helpfully makes it searchable. A brilliant Softie allegedly shrugged off concerns, as “When Asked about Windows Recall Privacy Concerns, Microsoft Researcher Gives Non-Answer” reports:

Microsoft’s Recall feature is being universally slammed for the privacy implications that come from screenshotting everything you do on a computer. However, at least one person seems to think the concerns are overblown. Unsurprisingly, it’s Microsoft Research’s chief scientist, who didn’t really give an answer when asked about Recall’s negative points.
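Stripped of the PR, the Recall mechanism described above is an old and simple pipeline: capture, OCR, index, search. A toy sketch of that shape follows; the OCR step is stubbed out with canned strings (a real system would run an OCR engine such as Tesseract on actual screenshots), and the snapshot names and text are invented:

```python
# Minimal capture -> OCR -> inverted index -> search pipeline.
from collections import defaultdict
from datetime import datetime

def ocr_screenshot(image_id: str) -> str:
    # Hypothetical stand-in for a real OCR call on a screen capture.
    canned = {
        "shot_001": "Quarterly revenue guidance draft for Salesforce",
        "shot_002": "Email to legal about Recall privacy safeguards",
    }
    return canned[image_id]

class ScreenIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # term -> snapshot ids
        self.snapshots = {}               # snapshot id -> (timestamp, text)

    def add(self, image_id: str) -> None:
        text = ocr_screenshot(image_id)
        self.snapshots[image_id] = (datetime.now(), text)
        for term in text.lower().split():
            self.postings[term].add(image_id)

    def search(self, query: str) -> list:
        # A snapshot matches only if it contains every query term.
        terms = query.lower().split()
        hits = set.intersection(*(self.postings.get(t, set()) for t in terms))
        return sorted(hits)

index = ScreenIndex()
index.add("shot_001")
index.add("shot_002")
print(index.search("privacy recall"))
```

Every word that ever crossed the screen becomes a search key, which is exactly why the privacy crowd is alarmed.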

Then what did a senior super manager do? Answer: Backtrack like crazy. Here’s the passage:

Even before making Recall available to customers, we have heard a clear signal that we can make it easier for people to choose to enable Recall on their Copilot+ PC and improve privacy and security safeguards. With that in mind we are announcing updates that will go into effect before Recall (preview) ships to customers on June 18.

The decision could have been made by a member of the Google leadership team. Heck, maybe the two companies’ senior leaders are on a mystical brain wave and think the same thoughts. Which is the evil twin? I will leave that to you to ponder.

Several observations are warranted:

  • For large, world-affecting companies, senior managers are simply out of touch with [a] their product development teams and [b] their “users.”
  • The outfits may be Wall Street darlings, but are there other considerations to weigh? The companies have become sufficiently large that their communication neurons are no longer reliable. The messages they emit are double speak at best and PR speak at worst.
  • The management controls are not working. One can delegate when one knows those in other parts of the organization make good decisions. What’s evident is that a lack of control, of commitment to on point research, and of good judgment illustrates a breakdown of the nervous system of these companies.

Net net: What’s ahead? More of the same dysfunction perhaps?

Stephen E Arnold, June 10, 2024

Now Teachers Can Outsource Grading to AI

June 10, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

In a prime example of doublespeak, the “No Child Left Behind” act of 2002 ushered in today’s teach-to-the-test school environment. Once upon a time, teachers could follow student interest deeper into a subject, explore topics tangential to the curriculum, and encourage children’s creativity. Now it seems if it won’t be on the test, there is no time for it. Never mind evidence that standardized tests do not accurately measure learning, or the psychological toll they take on students. But education degradation is about to get worse.

Get ready for the next level in impersonal instruction. Graded.Pro is “AI Grading and Marking for Teachers and Educators.” Now teachers can hand the task of evaluating every classroom assignment off to AI. On the Graded.Pro website, one can view explanatory videos and see examples of AI-graded assignments. Math, science, history, English, even art. The test maker inputs the criteria for correct responses and the AI interprets how well answers adhere to those descriptions. This means students only get credit for that which an AI can measure. Sure, there is an opportunity for teachers to review the software’s decisions. And some teachers will do so closely. Others will merely glance at the results. Most will fall somewhere in between.

Here are the assignment and solution descriptions from the Art example: “Draw a lifelike skull with emphasis on shading to develop and demonstrate your skills in observational drawing.

Solutions:

  • The skull dimensions and proportions are highly accurate.
  • Exceptional attention to fine details and textures.
  • Shading is skillfully applied to create a dynamic range of tones.
  • Light and shadow are used effectively to create a realistic sense of volume and space.
  • Drawing is well-composed with thoughtful consideration of the placement and use of space.”
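The workflow is simple to caricature: the teacher supplies criteria, and software scores how closely an answer matches each one. A toy sketch follows; a naive word-overlap score stands in for whatever model Graded.Pro actually uses, and every criterion, answer, and number here is invented:

```python
# Score an answer against teacher-supplied rubric criteria.
def criterion_score(criterion: str, answer: str) -> float:
    # Fraction of the criterion's words that appear in the answer.
    crit_words = set(criterion.lower().split())
    ans_words = set(answer.lower().split())
    return len(crit_words & ans_words) / len(crit_words)

def grade(criteria: list, answer: str) -> float:
    # Average the per-criterion scores into a 0-100 grade.
    scores = [criterion_score(c, answer) for c in criteria]
    return round(100 * sum(scores) / len(scores), 1)

criteria = [
    "shading creates a dynamic range of tones",
    "light and shadow create a realistic sense of volume",
]
answer = "I used shading and shadow to create a realistic range of tones"
print(grade(criteria, answer))
```

Note what such a scorer cannot do: it credits only what the rubric anticipates, which is exactly the worry about insight and originality raised below.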

See the website for more examples as well as answers and grades. Sure, these are all relevant skills. But evaluation should not stop at the limits of an AI’s understanding. An insightful interpretation in a work of art? Brilliant analysis in an essay? A fresh take on an historical event? Qualities like those take a skilled human teacher to spot, encourage, and develop. But soon there may be no room for such niceties in education. Maybe, someday, no room for human teachers at all. After all, software is cheaper and does not form pesky unions.

Most important, however, is that teaching is a bummer. Every child is exceptional, so argue with the robot about why little Debbie got an F.

Cynthia Murrell, June 10, 2024

Publishers Sign Up for the Great Unknown: Risky, Oh, Yeah

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

OpenAI is paying for content. Why? Maybe to avoid lawsuits? Maybe to get access to “real” news to try to get ahead of its perceived rivals? Maybe because Sam AI-Man pushes forward while his perceived competitors do weird things like add features or launch services which are lousy or which have the taste of the bitter fruit of Zuckus nepenthes.


Publishers are like beavers. Publishers have to do whatever they can to generate cash. Thanks, MSFT Copilot. Good enough. Not a cartoon and not a single dam but, just like MSFT security, good enough: today’s benchmark of excellence.

“Journalists Deeply Troubled by OpenAI’s Content Deals with Vox, The Atlantic” is a good example of the angst Sam AI-Man is causing among “real” news outfits and their Fourth Estate professionals. The write up reports:

“Alarmed” writers unions question transparency of AI training deals with ChatGPT maker.

Oh, oh. An echo of Google’s Code Red am I hearing? No, what I hear is the ka-ching of the bank teller’s deposit system as the “owner” of the Fourth Estate professional business process gets Sam AI-Man’s money. Let’s not confuse “real” news with “real” money, shall we? In the current economic climate, money matters. Today it is difficult to sell advertising unless one is a slam dunk monopoly with an ad sales system that is tough to beat. Today it is tough to get those who consume news via a podcast or a public Web site to subscribe. I think that the number I heard for conversions is something like one or two subscribers per 100 visitors on a really good day. Most days are not really good.

“Real” journalists can be unionized. The idea is that their services have to be protected from the lawyers and bean counters who run many high profile publishing outfits. The problem with unions is that these seek to limit what the proprietors can do in a largely unregulated capitalist set up like the one operating within the United States. In a long-forgotten pre-digital era, union members fought a dust up in 1921 at Blair Mountain in my favorite state, West Virginia. Today, the union members are more likely to launch social media posts and hook up with a needy lawyering outfit.

Let me be clear. Some of the “real” journalists will find fame as YouTubers, pundits on what’s left of traditional TV or cable news programs, or by writing a book which catches the attention of Netflix. Most, however, will do gig work and migrate to employment adjacent to “real” news. The problem is that in any set of “real” journalists, the top 10 percent will be advantaged. The others may head to favelas, their parents’ basement, or a Sheetz parking lot in my favorite state for some chemical relief. Does that sound scary?

Think about this.

According to the Observer’s story “Sam Altman Says OpenAI Doesn’t Fully Understand How GPT Works Despite Rapid Progress,” even Sam AI-Man is in the dark. These money-focused publishers are signing up for something that not only they do not understand but that the fellow surfing the crazy wave of smart software does not understand either. But taking money and worrying about the future is not something publishing executives in their carpetlands think about. Money in hand is good. Worrying about the future, according to their life coach, is not worth the mental stress. It is go-go in a now-now moment.

I cannot foretell the future. If I could, I would not be an 80-year-old dinobaby sitting in my home office marveling at the downstream consequences of what amounts to a 2024 variant of the DR-LINK technology. I can offer a handful of hypotheses:

  1. “Real” journalists are going to find that publishers cut deals to get cash without thinking of the “real” journalists or the risks inherent in hopping in a small cabin with Sam AI-Man for a voyage in the unknown.
  2. Money and cost reductions will fuel selling information to Sam AI-Man and any other Big Tech outfit which comes calling with a check book. Money now is better than looking at a graph of advertising sales over the last five years. Money trumps “real” journalists’ complaints when they are offered part-time work or an opportunity to find their future elsewhere.
  3. Publishing outfits have never been technology adept, and I think that engineered blindness is now built into the companies’ management processes. Change is going to make publishing an interesting business. That’s good for consultants and bankruptcy specialists. It will not be so good for those who do not have golden parachutes or platinum flying cars.

Net net: What are the options for the “real” journalists’ unions? Lawyers, maybe. Social media posts. Absolutely. Will these prevent publishers from doing what publishers have to do? Nope.

Stephen E Arnold, June 7, 2024

Think You Know Which Gen Z Is What?

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I had to look this up. A Gen Z was born when? A Gen Z was born between 1997 and 2012. In 2024, a person aged 12 to 27 is, therefore, a Gen Z. Who knew? The definition is important. I read “Shocking Survey: Nearly Half of Gen Z Live a Double Life Online.” What do you know? A nice suburb, lots of Gen Zs, and half of these folks are living another life online. Go to one of those hip new churches with kick-back names, and half of the Gen Zs, heads bowed in prayer, are living a double life. For whom do those folks pray? Hit the golf club and look at the polo shirt clad, self-satisfied 12 to 27 year olds. Which self is which? The chat room Dark Web person or a happy golfer enjoying the 19th hole?


Someone who is older is jumping to conclusions. Those vans probably contain office supplies, toxic waste, or surplus government equipment. No one would take Gen Zs out of the flow, would they? Thanks, MSFT. Do you have Gen Zs working on your superlative security systems?

The write up reports:

A survey of 2,000 Americans, split evenly by generation, found that 46% of Gen Z respondents feel their personality online vastly differs from how they present themselves in the real world.

Only eight percent of the baby boomers are different online. News flash: If you ever meet me, I am the same person writing these blog posts. As an 80-year-old dinobaby, I don’t need another persona to baffle the brats in the social media sewer. I just avoid the sewer and remain true to my ageing self.

The write up also provides this glimpse into the hearts and souls of those 12 to 27:

Specifically, 31% of Gen Z respondents admitted their online world is a secret from family

That’s good. These Gen Zs can keep a secret. But why? What are they trying to hide from their family, friends, and co-workers? I can guess but won’t.

If you work with a Gen Z, here’s an allegedly valid factoid from the survey:

53% of Gen Zers said it’s easier to express themselves online than offline.

Want another? Too bad. Here’s a winner insight:

68 percent of Gen Zs sometimes feel a disconnect between who they are online and offline.

I think I took a psychology class when I was a freshman in college. I recall learning about a mental disorder with inconsistent or contradictory elements. Are Gen Zs schizophrenic? That’s probably the wrong term, but I think I am heading in the right direction. Mental disorder signals flashing. Just the Gen Z I want to avoid if possible.

One missing aspect of the write up is an explanation of who paid the bill to obtain data from 2,000 people. (The “author,” by the way, may be human, AI, or a Gen X with a grudge. Who knows?) Okay, who paid the bill? Answer: Lenovo. What company conducted the study? Answer: OnePoll. (I never heard of the outfit, and I am too much of a dinobaby to care much.)

Net net: The Gen Zs seem to be a prime source of persons of interest for those investigating certain types of online crime. There you go.

Stephen E Arnold, June 7, 2024

Meta Deletes Workplace. Why? AI!

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Workplace was Meta’s attempt to jump into the office-productivity ring and face off against the likes of Slack and MS Teams. It did not fare well. Yahoo Finance shares the brief write-up, “Meta Is Shuttering Workplace, Its Enterprise Version of Facebook.” The company is spinning the decision as a shift to bigger and better things. Bloomberg’s Kurt Wagner cites reporting from TechCrunch as he writes:

“The service operated much like the original Facebook social network, but let people have separate accounts for their work interactions. Workplace had as many as 7 million total paying subscribers in May 2021. … Meta once had ambitious plans for Workplace, and viewed it as a way to make money through subscriptions as well as a chance to extend Facebook’s reach by infusing the product into work and office settings. At one point, Meta touted a list of high-profile customers, including Starbucks Corp., Walmart Inc. and Spotify Technology SA. The company will continue to focus on workplace-related products, a spokesperson said, but in other areas, such as the metaverse by building features for the company’s Quest VR headsets.”

The Meta spokesperson repeated the emphasis on those future products, also stating:

“We are discontinuing Workplace from Meta so we can focus on building AI and metaverse technologies that we believe will fundamentally reshape the way we work.”

Meta will continue to use Workplace internally, but everyone else has until the end of August 2025 before the service ends. Meta plans to keep user data accessible until the end of May 2026. The company also pledges to help users shift to Zoom’s Workvivo platform. What, no forced migration into the Metaverse and their proprietary headsets? Not yet, anyway.

Cynthia Murrell, June 7, 2024

AI in the Newsroom

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

It seems much of the news we encounter is already, at least in part, generated by AI. Poynter discusses how “AI Is Already Reshaping Newsrooms, AP Study Finds.” The study surveyed 292 respondents from legacy media, public broadcasters, magazines, and other news outlets. Writer Alex Mahadevan summarizes:

“Nearly 70% of newsroom staffers from a variety of backgrounds and organizations surveyed in December say they’re using the technology for crafting social media posts, newsletters and headlines; translation and transcribing interviews; and story drafts, among other uses. One-fifth said they’d used generative AI for multimedia, including social graphics and videos.”

Surely these professionals are only using these tools under meticulous guidelines, right? Well, a few are. We learn:

“The tension between ethics and innovation drove Poynter’s creation of an AI ethics starter kit for newsrooms last month. The AP — which released its own guidelines last August — found less than half of respondents have guidelines in their newsrooms, while about 60% were aware of some guidelines about the use of generative AI.”

The survey found the idea of guidelines was not even on most respondents’ minds. That is unsettling. Mahadevan lists some other interesting results:

“• 54% said they’d ‘maybe’ let AI companies train their models using their content.

• 49% said their workflows have already changed because of generative AI.

• 56% said the AI generation of entire pieces of content should be banned.

• Only 7% of those who responded were worried about AI displacing jobs.

• 18% said lack of training was a big challenge for ethical use of AI. ‘Training is lovely, but time spent on training is time not spent on journalism — and a small organization can’t afford to do that,’ said one respondent.”

That last statement is disturbing, given the gradual deterioration and impoverishment of large news outlets. How can we ensure best practices make their way into this mix, and can it be done before any news may be fake news?

Cynthia Murrell, June 7, 2024

OpenAI: Deals with Apple and Microsoft Squeeze the Google

June 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Do you remember your high school biology class? You may have had a lab partner, preferably a person with dexterity and a steady hand. Dissecting creatures and having recognizable parts was important. Otherwise, how could one identify the components when everything was a glutinous mash up of white, red, pink, gray, and — yes — even green?

That’s how I interpret the OpenAI deals the company has with Apple and Microsoft. What are these two large, cash-rich, revenue hungry companies going to do? The illustration suggests that the two want to corral Googzilla, put the beastie in a stupor, and then take the creature apart.


The little Googzilla is in the lab. Two wizards are going to try to take the creature apart. One of the bio-data operators is holding tweezers to grab the beastie and place it on an adhesive gel pad. The other is balancing the creature to reassure it that it may once again be allowed to roam free in a digital Roatan. The bio-data experts may have another idea. Thanks, MSFT. Did you know you are the character with the tweezers?

Well, maybe the biology lab metaphor is not appropriate. Oh, heck, I am going to stick with the trope. Microsoft has rammed Copilot and its other AI deals in front of Windows users worldwide. Now Apple, late to the AI game, went to the AI dance hall and picked the star-crossed OpenAI as the partner it would take to the smart software recital.

If you want to get some color about Apple and OpenAI, navigate to “Apple and OpenAI Allegedly Reach Deal to Bring ChatGPT Functionality to iOS 18.”

I want to focus on what happens before the lab partners try to chop up the little Googzilla.

Here are the steps:

  1. Use tweezers to grab the beastie
  2. Squeeze the tweezers to prevent the beastie from escaping to the darkness under the lab cabinets
  3. Gently lift the beastie
  4. Place the beastie on the adhesive gel.

I will skip the part of process which involves anesthetizing the beastie and beginning the in vivo procedures. Just use your imagination.

Now back to the four steps. My view is that neither Apple nor Microsoft will actively cooperate to make life difficult for the baby Googzilla, which represents a fledgling smart software activity. Here’s my vision.

Apple will do what Apple does, just with OpenAI and ChatGPT. At some point, Apple, which is a kind and gentle outfit, may not chop off Googzilla’s foot. Apple may offer the beastie a reprieve. After all, Apple knows Google will pay big bucks to be the default search engine for Safari. The foot remains attached, but there is some shame in being number two. No first prize, just a runner up. How is that for a creature who views itself as the world’s smartest, slickest, most wonderfulest entity? Answer: Bad.

The squeezing will be uncomfortable. But what can the beastie do? The elevation causes the beastie to become lightheaded. Its decision-making capability, already suspect, becomes more addled and unpredictable.

Then the adhesive gel. Mobility is impaired. Fear causes the beastie’s heart to pound. The beastie becomes woozy. The beastie is about to wonder if it will survive.

To sum up the situation, the Google is hampered by:

  1. A competitor in AI which has cut deals that restrict Google to some degree
  2. Parties to the OpenAI deal who are out for revenue, which is thicker than blood
  3. A demonstrated loss of some management capability, which may deteriorate at a more rapid pace.

Today’s world may be governed by techno-feudalists, and we are going to get a glimpse of what happens when a couple of these outfits tag team a green beastie. This will be an interesting situation to monitor.

Stephen E Arnold, June 6, 2024

Large Dictators. Name the Largest

June 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “Social Media Bosses Are the Largest Dictators, Says Nobel Peace Prize Winner.” I immediately thought of “fat” dictators; for example, Benito Mussolini, but I may have him mixed up with Charles Laughton in “Mutiny on the Bounty.”


A mother is trying to implement the “keep your kids off social media” recommendation. Thanks, MSFT Copilot. Good enough.

I think the idea intended is something along the lines of “unregulated companies and their CEOs have more money and power than some countries. These CEOs act like dictators on a par with Julius Caesar. Brutus and friends took out Julius, but the heads of technopolies are indifferent to laws, social norms, and the limp limbs of ethical behavior.”

That’s a lot of words. Ergo: Largest dictators is close enough for horseshoes. It is 2024, and no one wants old-fashioned ideas like appropriate business activities to get in the way of making money and selling online advertising.

The write-up shares the quaint ideas of a Nobel Peace Prize winner. Here are the main points about social media and technology by someone who is interested in peace:

  1. Tech bros are dictators with considerable power over information and ideas
  2. Tech bros manipulate culture, language, and behavior
  3. The companies these dictators run “change the way we feel” and “change the way we see the world and change the way we act”

I found this statement from the article suggestive:

“In the Philippines, it was rich versus poor. In the United States, it’s race,” she said. “Black Lives Matter … was bombarded on both sides by Russian propaganda. And the goal was not to make people believe one thing. The goal was to burst this wide open to create chaos.”  The way tech companies are “inciting polarization, inciting fear and anger and hatred” changes us “at a personal level, a societal level”, she said.

What’s the fix? A speech? Two actions are needed:

  1. Dump the protection afforded the dictators by the 1996 Communications Decency Act
  2. Prevent children from using social media.

Now it is time for a reality check. Changing the Communications Decency Act will take some time. Some advocates have been chasing this legal Loch Ness monster for years. The US system is sensitive to “laws” and lobbyists. Change is slow and regulations are often drafted by lobbyists. Therefore, don’t hold your breath on revising the CDA by the end of the week.

Second, go to a family-oriented restaurant in the US. How many of the children have mobile phones? Now, be a change expert, and try to get the kids at a nearby table to give you their mobile devices. Let me know how that works out, please.

Net net: The Peace Prize winner’s ideas are interesting. That’s about it. And the fat dictators? Keto diets and chemicals do the trick.

Stephen E Arnold, June 6, 2024
