Meta Deletes Workplace. Why? AI!

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Workplace was Meta’s attempt to jump into the office-productivity ring and face off against the likes of Slack and MS Teams. It did not fare well. Yahoo Finance shares the brief write-up, “Meta Is Shuttering Workplace, Its Enterprise Version of Facebook.” The company is spinning the decision as a shift to bigger and better things. Bloomberg’s Kurt Wagner cites reporting from TechCrunch as he writes:

“The service operated much like the original Facebook social network, but let people have separate accounts for their work interactions. Workplace had as many as 7 million total paying subscribers in May 2021. … Meta once had ambitious plans for Workplace, and viewed it as a way to make money through subscriptions as well as a chance to extend Facebook’s reach by infusing the product into work and office settings. At one point, Meta touted a list of high-profile customers, including Starbucks Corp., Walmart Inc. and Spotify Technology SA. The company will continue to focus on workplace-related products, a spokesperson said, but in other areas, such as the metaverse by building features for the company’s Quest VR headsets.”

The Meta spokesperson repeated the emphasis on those future products, also stating:

“We are discontinuing Workplace from Meta so we can focus on building AI and metaverse technologies that we believe will fundamentally reshape the way we work.”

Meta will continue to use Workplace internally, but everyone else has until the end of August 2025 before the service ends. Meta plans to keep user data accessible until the end of May 2026. The company also pledges to help users shift to Zoom’s Workvivo platform. What, no forced migration into the Metaverse and its proprietary headsets? Not yet, anyway.

Cynthia Murrell, June 7, 2024

AI in the Newsroom

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

It seems much of the news we encounter is already, at least in part, generated by AI. Poynter discusses how “AI Is Already Reshaping Newsrooms, AP Study Finds.” The study surveyed 292 respondents from legacy media, public broadcasters, magazines, and other news outlets. Writer Alex Mahadevan summarizes:

“Nearly 70% of newsroom staffers from a variety of backgrounds and organizations surveyed in December say they’re using the technology for crafting social media posts, newsletters and headlines; translation and transcribing interviews; and story drafts, among other uses. One-fifth said they’d used generative AI for multimedia, including social graphics and videos.”

Surely these professionals are only using these tools under meticulous guidelines, right? Well, a few are. We learn:

“The tension between ethics and innovation drove Poynter’s creation of an AI ethics starter kit for newsrooms last month. The AP — which released its own guidelines last August — found less than half of respondents have guidelines in their newsrooms, while about 60% were aware of some guidelines about the use of generative AI.”

The survey found the idea of guidelines was not even on most respondents’ minds. That is unsettling. Mahadevan lists some other interesting results:

“• 54% said they’d ‘maybe’ let AI companies train their models using their content.

• 49% said their workflows have already changed because of generative AI.

• 56% said the AI generation of entire pieces of content should be banned.

• Only 7% of those who responded were worried about AI displacing jobs.

• 18% said lack of training was a big challenge for ethical use of AI. ‘Training is lovely, but time spent on training is time not spent on journalism — and a small organization can’t afford to do that,’ said one respondent.”

That last statement is disturbing, given the gradual deterioration and impoverishment of large news outlets. How can we ensure best practices make their way into this mix, and can it be done before any given news story might be fake news?

Cynthia Murrell, June 7, 2024

OpenAI: Deals with Apple and Microsoft Squeeze the Google

June 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Do you remember your high school biology class? You may have had a lab partner, preferably a person with dexterity and a steady hand. Dissecting creatures and ending up with recognizable parts was important. Otherwise, how could one identify the components when everything was a glutinous mash-up of white, red, pink, gray, and — yes — even green?

That’s how I interpret the OpenAI deals the company has with Apple and Microsoft. What are these two large, cash-rich, revenue-hungry companies going to do? The illustration suggests that the two want to corral Googzilla, put the beastie in a stupor, and then take the creature apart.


The little Googzilla is in the lab. Two wizards are going to try to take the creature apart. One of the bio-data operators is holding tweezers to grab the beastie and place it on an adhesive gel pad. The other is balancing the creature to reassure it that it may once again be allowed to roam free in a digital Roatan. The bio-data experts may have another idea. Thanks, MSFT. Did you know you are the character with the tweezers?

Well, maybe the biology lab metaphor is not appropriate. Oh, heck, I am going to stick with the trope. Microsoft has rammed Copilot and its other AI deals in front of Windows users worldwide. Now Apple, late to the AI game, went to the AI dance hall and picked the star-crossed OpenAI as a service it would take to the smart software recital.

If you want to get some color about Apple and OpenAI, navigate to “Apple and OpenAI Allegedly Reach Deal to Bring ChatGPT Functionality to iOS 18.”

I want to focus on what happens before the lab partners try to chop up the little Googzilla.

Here are the steps:

  1. Use tweezers to grab the beastie
  2. Squeeze the tweezers to prevent the beastie from escaping to the darkness under the lab cabinets
  3. Gently lift the beastie
  4. Place the beastie on the adhesive gel.

I will skip the part of the process which involves anesthetizing the beastie and beginning the in vivo procedures. Just use your imagination.

Now back to the four steps. My view is that neither Apple nor Microsoft will actively cooperate to make life difficult for the baby Googzilla, which represents a fledgling smart software activity. Here’s my vision.

Apple will do what Apple does, just with OpenAI and ChatGPT. At some point, Apple, which is a kind and gentle outfit, may not chop off Googzilla’s foot. Apple may offer the beastie a reprieve. After all, Apple knows Google will pay big bucks to be the default search engine for Safari. The foot remains attached, but there is some shame attached to being number two. No first prize, just a runner-up: How is that for a creature that views itself as the world’s smartest, slickest, most wonderfulest entity? Answer: Bad.

The squeezing will be uncomfortable. But what can the beastie do? The elevation causes the beastie to become lightheaded. Its decision-making capability, already suspect, becomes more addled and unpredictable.

Then the adhesive gel. Mobility is impaired. Fear causes the beastie’s heart to pound. The beastie becomes woozy. The beastie is about to wonder if it will survive.

To sum up the situation: The Google is hampered by:

  1. A competitor in AI which has cut deals that restrict Google to some degree
  2. Parties to the OpenAI deals who are out for revenue, which is thicker than blood
  3. A demonstrated loss of some management capability, which may deteriorate at a more rapid pace.

Today’s world may be governed by techno-feudalists, and we are going to get a glimpse of what happens when a couple of these outfits tag-team a green beastie. This will be an interesting situation to monitor.

Stephen E Arnold, June 6, 2024

Selling AI with Scare Tactics

June 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Ah, another article with more assertions to make workers feel they must adopt the smart software that threatens their livelihoods. AI automation firm UiPath describes “3 Common Barriers to AI Adoption and How to Overcome Them.” Before marketing director Michael Robinson gets to those barriers, he tries to motivate readers who might be on the fence about AI. He writes:

“There’s a growing consensus about the need for businesses to embrace AI. McKinsey estimated that generative AI could add between $2.6 to $4.4 trillion in value annually, and Deloitte’s ’State of AI in the Enterprise’ report found that 94% of surveyed executives ‘agree that AI will transform their industry over the next five years.’ The technology is here, it’s powerful, and innovators are finding new use cases for it every day. But despite its strategic importance, many companies are struggling to make progress on their AI agendas. Indeed, in that same report, Deloitte estimated that 74% of companies weren’t capturing sufficient value from their AI initiatives. Nevertheless, companies sitting on the sidelines can’t afford to wait any longer. As reported by Bain & Company, a ‘larger wedge’ is being driven ‘between those organizations that have a plan [for AI] and those that don’t—amplifying advantage and placing early adopters into stronger positions.’”

Oh, no! What can the laggards do? Fret not, the article outlines the biggest hurdles: lack of a roadmap, limited in-house expertise, and security or privacy concerns. Curious readers can see the post for details about each. As it happens, software like UiPath’s can help businesses clear every one. What a coincidence.

Cynthia Murrell, June 6, 2024

Does Google Follow Its Own Product Gameplan?

June 5, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

If I were to answer the question based on Google’s AI summaries, I would say, “Nope.” The latest joke added to the Sundar & Prabhakar Comedy Show is the one about pizza. Here’s the joke if I recall it correctly.

Sundar: Yo, Prabhakar, how do you keep cheese from slipping off a hot pizza?

Prabhakar: I don’t know. Please, tell me, oh gifted one.

Sundar: You have your cook mix it with non-toxic glue, faithful colleague.

Prabhakar: [Laughing loudly]. That’s a good one, luminescent soul.

Did Google muff the bunny with its high-profile smart software feature? To answer the question, I looked to the ever-objective Fast Company online publication. I found a write up which appears to provide some helpful information. The article is called “Conduct Stellar User Research Even Faster with This Google Ventures Formula.” Google has game plans for creating MVPs or minimum viable products.


The confident comedians look concerned when someone in the audience throws a large tomato at the well-paid performers. Thanks, MSFT. Working on security or the AI PC today?

Let’s look at what one Google partner reveals as the equivalent of the formula for Coca-Cola or McDonald’s recipe for Big Mac sauce.

Here’s the game-winning touchdown razzle-dazzle:

  1. Use a bullseye customer sprint. The idea is to get five “customers” and show them three prototypes. Listen for pros and cons. Then debrief together in a “watch party.”
  2. Conduct sprints early. The idea is to get this feedback before “a team invests a lot of time, money, or reputational risk into building, launching, and marketing an MVP.” (That’s a minimum viable product, not necessarily a good or needed product, I think.)
  3. Keep research bite size. Avoiding heavy-duty research overkill is the way I interpret the Google-speak. The idea is that massive research projects are not desirable. They are work. Nibble, don’t gobble, I assume.
  4. Keep the process simple. Keep the prototypes simple. Get those interviews. That’s fun. Plus, there is the “watch party”, remember?

Okay, now let’s think about what Google suggests are outliers or fiddled AI results. Why is Google AI telling people to eat a rock a day?

The “bullseye” baloney is bull output for sure. I am on reasonably firm ground because in Paris the Sundar & Prabhakar Comedy Act showed incorrect outputs from Google’s AI system. Then Google invented about a dozen variations on the theme of a scrambled egg at Google I/O. Now Google is faced with its AI system telling people dogs own hotels. No, some dogs live in hotels. Some dogs deliver outputs in hotels. Dogs do not own hotels unless it is in a crazy virtual reality headset created by Apple or Meta.

The write up uses the word “stellar” to describe this MVP product stuff. The reality is that Googlers are creating work for themselves, listening to “customers” who know little about AI or anything other than “buy ads, get traffic.” The “stellar” part of the title is like the “quantum supremacy” horsefeathers assertion the company crafted.

Smart software, when trained and managed, can do some useful things. However, the bullseye and quantum supremacy stuff is capable of producing social media memes, concern among some stakeholders, and evidence that Google cannot do anything useful at this time.

Maybe the company will get its act together? When it does, I will check out the next Sundar & Prabhakar Comedy Act. Maybe some of the jokes will work? Let’s hope they are more effective than the bull’s-eye method. (Sorry. I had to fix up the spelling, Google.)

Stephen E Arnold, June 5, 2024

Allegations of Personal Data Flows from X.com to Au10tix

June 4, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I work from my dinobaby lair in rural Kentucky. What the heck do I know about Hod HaSharon, Israel? The answer is, “Not much.” However, I read an online article called “Elon Musk Now Requiring All X Users Who Get Paid to Send Their Personal ID Details to Israeli Intelligence-Linked Corporation.” I am not sure if the statements in the write up are accurate. I want to highlight some items from the write up because I have not seen information about this interesting identity verification process in my other feeds. This could be the second most covered news item in the last week or two. Number one goes to Google’s telling people to eat a rock a day and its weird “not our fault” explanation of its quantumly supreme technology.

Here’s what I carried away from this X to Au10tix write up. (A side note: intelligence outfits like obscure names. In this case, Au10tix is a cute conversion of the word authentic to a unique string of characters. Aw ten tix. Get it?)

Yes, indeed. There is an outfit called Au10tix, and it is based about 60 miles north of Jerusalem, not in the intelware capital of the world, Tel Aviv. The company, according to the cited write up, has a deal with Elon Musk’s X.com. The write up asserts:

X now requires new users who wish to monetize their accounts to verify their identification with a company known as Au10tix. While creator verification is not unusual for online platforms, Elon Musk’s latest move has drawn intense criticism because of Au10tix’s strong ties to Israeli intelligence. Even people who have no problem sharing their personal information with X need to be aware that the company they are using for verification is connected to the Israeli government. Au10tix was founded by members of the elite Israeli intelligence units Shin Bet and Unit 8200.

Sounds scary. But that’s the point of the article. I would like to remind you, gentle reader, that Israel’s vaunted intelligence systems failed as recently as October 2023. That event was described to me by one of the country’s former intelligence professionals as “our 9/11.” Well, maybe. I think it made clear that the intelware does not work as advertised in some situations. I don’t have first-hand information about Au10tix, but I would suggest some caution before engaging in flights of fancy.

The write up presents this as actual factual information:

The executive director of the Israel-based Palestinian digital rights organization 7amleh, Nadim Nashif, told the Middle East Eye: “The concept of verifying user accounts is indeed essential in suppressing fake accounts and maintaining a trustworthy online environment. However, the approach chosen by X, in collaboration with the Israeli identity intelligence company Au10tix, raises significant concerns. “Au10tix is located in Israel and both have a well-documented history of military surveillance and intelligence gathering… this association raises questions about the potential implications for user privacy and data security.” Independent journalist Antony Loewenstein said he was worried that the verification process could normalize Israeli surveillance technology.

What the write up does not provide is significant detail. The write up reports:

Au10tix has also created identity verification systems for border controls and airports and formed commercial partnerships with companies such as Uber, PayPal and Google.

My team’s research into online gaming found suggestions that the estimable 888 Holdings may have a relationship with Au10tix. The company pops up in some of our research into facial recognition verification. The Israeli gig work outfit Fiverr.com seems to be familiar with the technology as well. I want to point out that one of the Fiverr gig workers based in the UK reported to me that she was no longer “recognized” by the Fiverr.com system. Yeah, October 2023 style intelware.

Who operates the company? Heading back into my files, I spotted a few names. These individuals may no longer be involved in the company, but several names remind me of individuals who have been active in the intelware game for a few years:

  • Ron Atzmon: Chairman (Unit 8200, which was not on the ball in October 2023, it seems)
  • Ilan Maytal: Chief Data Officer
  • Omer Kamhi: Chief Information Security Officer
  • Erez Hershkovitz: Chief Financial Officer (formerly of the very interesting intel-related outfit Voyager Labs, a company about which the Brennan Center has a tidy collection of information related to the LAPD)

The company’s technology is available in the Azure Marketplace. That description identifies three core functions of Au10tix’ systems:

  1. Identity verification. Allegedly the system has real-time identity verification. Hmm. I wonder why it took quite a bit of time to figure out who did what in October 2023. That question is probably unfair because it appears no patrols or systems “saw” what was taking place. But, I should not nitpick. The Azure service includes a “regulatory toolbox including disclaimer, parental consent, voice and video consent, and more.” That disclaimer seems helpful.
  2. Biometrics verification. Again, this is an interesting assertion. As imagery of the October 2023 events emerged I asked myself, “How did those ID-to-selfie, selfie-to-selfie, and selfie-to-token matches work?” Answer: Ask the families of those killed. (A generic sketch of this kind of matching appears after this list.)
  3. Data screening and monitoring. The system can “identify potential risks and negative news associated with individuals or entities.” That might be helpful in building automated profiles of individuals by companies licensing the technology. I wonder if this capability can be hooked to other Israeli spyware systems to provide a particularly helpful, real-time profile of a person of interest?
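For readers who wonder what that kind of matching typically involves, here is a minimal, hypothetical sketch in Python. It is not Au10tix’s actual implementation; the embedding comparison, the cosine-similarity test, and the 0.6 threshold are generic assumptions about this class of face-verification system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two face embeddings, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_id_to_selfie(id_embedding: np.ndarray,
                        selfie_embedding: np.ndarray,
                        threshold: float = 0.6) -> bool:
    # Generic ID-to-selfie check: a face-recognition model (not shown here)
    # maps each photo to an embedding vector, and the 'match' is a threshold
    # test. The threshold trades false accepts against false rejects, which
    # is how a legitimate gig worker can suddenly stop being 'recognized.'
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold

# Toy demonstration with random stand-ins for real embeddings.
rng = np.random.default_rng(seed=42)
id_vec = rng.normal(size=512)                          # photo on the ID
selfie_vec = id_vec + rng.normal(scale=0.3, size=512)  # same face, new photo
print(verify_id_to_selfie(id_vec, selfie_vec))         # True for this pair
```

The same mechanics explain both failure modes: lower the threshold and impostors slip through; raise it and legitimate users, like the Fiverr gig worker mentioned below, stop being recognized.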

Let’s assume the write up is accurate and X.com is licensing the technology. X.com — according to “Au10tix Is an Israeli Company and Part of a Group Launched by Members of Israel’s Domestic Intelligence Agency, Shin Bet” — now includes this verification notice:


The circled segment of the social media post says:

I agree to X and Au10tix using images of my ID and my selfie, including extracted biometric data to confirm my identity and for X’s related safety and security, fraud prevention, and payment purposes. Au10tix may store such data for up to 30 days. X may store full name, address, and hashes of my document ID number for as long as I participate in the Creator Subscription or Ads Revenue Share program.

This dinobaby followed the October 2023 event with shock and surprise. The dinobaby has long been a champion of Israel’s intelware capabilities, and I have done some small projects for firms which I am not authorized to identify. Now I am skeptical and more critical. What if X’s identity service is compromised? What if the servers are breached and the data exfiltrated? What if the system does not work and downstream financial fraud is enabled by X’s push beyond short text messaging? Much intelware is little more than glorified and old-fashioned search and retrieval.

Do Mr. Musk and other commercial purchasers of intelware know about cracks and fissures in intelware systems which allowed the October 2023 event to go undetected until live-fire reports arrived? This tie-up is interesting and worth monitoring.

Stephen E Arnold, June 4, 2024

AI Will Not Definitely, Certainly, Absolutely Not Take Some Jobs. Whew. That Is News

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Outfits like McKinsey & Co. are kicking the tires of smart software. Some bright young sprouts, I have heard, arrive with a penchant for AI systems that create summaries and output basic information on subjects the youthful masters of the universe do not know. Will consulting services firms, publishers, and customer service outfits embrace smart software? The answer is, “You bet your bippy.”

“Why?” Answer: Potential cost savings. Humanoids require vacations, health care, bonuses, pension contributions (ho ho ho), and an old-fashioned and inefficient five-day work week.


Cost reductions over time, cost controls in real time, and more consistent outputs mean that as long as smart software is good enough, the technologies will go through organizations with more efficiency than Union General William T. Sherman showed when he led some 60,000 soldiers on a 285-mile march from Atlanta to Savannah, Georgia. Thanks, MSFT Copilot. Working on security today?

Software is allegedly better, faster, and cheaper. Software, particularly AI, may not be better, faster, or cheaper. But once someone is fired, the enthusiasm to return to the fold may be diminished. Often the response is a semi-amusing and often negative video posted on social media.

“Here’s Why AI Probably Isn’t Coming for Your Job Anytime Soon” disagrees with my fairly conservative prediction that consulting, publishing, and some service outfits will be undergoing what I call “humanoid erosion” and “AI accretion.” The write up asserts:

We live in an age of hyper specialization. This is a trend that’s been evolving for centuries. In his seminal work, The Wealth of Nations (written within months of the signing of the Declaration of Independence), Adam Smith observed that economic growth was primarily driven by specialization and division of labor. And specialization has been a hallmark of computing technology since its inception. Until now. Artificial intelligence (AI) has begun to alter, even reverse, this evolution.

Okay, Econ 101. Wonderful. But… and there are some “buts,” of course. The write up says:

But the direction is clear. While society is moving toward ever more specialization, AI is moving in the opposite direction and attempting to replicate our greatest evolutionary advantage—adaptability.

Yikes. I am not sure that AI is going in any direction. Senior managers are going toward reducing costs. “Good enough,” not excellence, is the high-water mark today.

Here’s another “but”:

But could AI take over the bulk of legal work or is there an underlying thread of creativity and judgment of the type only speculative super AI could hope to tackle? Put another way, where do we draw the line between general and specific tasks we perform? How good is AI at analyzing the merits of a case or determining the usefulness of a specific document and how it fits into a plausible legal argument? For now, I would argue, we are not even close.

I don’t remember much about economics. In fact, I only think about economics in terms of reducing costs and having more money for myself. Good old Adam wrote:

Wherever there is great property there is great inequality. For one very rich man, there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many.

When it comes to AI, inequality is baked in. The companies that are competing fiercely to dominate the core technology are not into equality. Neither are the senior managers who want to reduce the costs associated with publishing, writing consulting reports based on business school baloney, or reviewing documents to hunt for nuggets useful in a trial. AI is going into these and similar knowledge professions. Most of those knowledge workers will have an opportunity to find their future elsewhere. But what about intake professionals in hospitals? What about dispatchers at trucking companies? What about government citizen service jobs? Sorry. Software is coming. Companies are developing orchestrator software to allow smart software to function across multiple related and inter-related tasks. Isn’t that what most work in many organizations is?

Here’s another test question from Econ 101:

Discuss the meaning of “It was not by gold or by silver, but by labor, that all wealth of the world was originally purchased.” Give examples of how smart software will replace labor and generate more money for those who own the rights to digital gold or silver.

Send me your blue book answers within 24 hours. You must write in legible cursive. You are not permitted to use artificial intelligence in any form to answer this question, which counts for 95 percent of your grade in Economics 102: Work in the Age of AI.

Stephen E Arnold, June 3, 2024

Price Fixing Is Price Fixing with or without AI

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Small-time landlords, such as the mom-and-pop investors who bought property for retirement, shouldn’t be compared to large, corporate landlords. The corporate landlords, however, give them all a bad name. Why? Because of actions like price fixing. ProPublica details how politicians are fighting the bad act: “We Found That Landlords Could Be Using Algorithms To Fix Rent Prices. Now Lawmakers Want To Make The Practice Illegal.”

RealPage sells software programmed with an AI algorithm that collects rent data and recommends how much landlords should charge. Lawmakers want to ban AI-based price fixing so landlords won’t become cartels that coordinate pricing. RealPage and its allies defend the software, but lawmakers have introduced a bill to ban it.

The FTC also states that AI-based real estate software has problems: “Price Fixing By Algorithm Is Still Price Fixing.” The FTC isn’t against technology. It’s against technology being used as a tool to cheat consumers:

“Meanwhile, landlords increasingly use algorithms to determine their prices, with landlords reportedly using software like “RENTMaximizer” and similar products to determine rents for tens of millions of apartments across the country. Efforts to fight collusion are even more critical given private equity-backed consolidation among landlords and property management companies. The considerable leverage these firms already have over their renters is only exacerbated by potential algorithmic price collusion. Algorithms that recommend prices to numerous competing landlords threaten to remove renters’ ability to vote with their feet and comparison-shop for the best apartment deal around.”
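To make the FTC’s mechanism concrete, here is a toy Python sketch. It is entirely hypothetical, not RealPage’s actual product; the pooling rule and the 2 percent markup are my own illustrative assumptions. The point is that when “competing” landlords feed private rent data into one shared recommender, each receives advice anchored to the pooled figure, so independent decisions converge as if they were coordinated.

```python
# Toy model of algorithmic price coordination (hypothetical, illustrative).

def recommend_rent(own_rent: float, competitor_rents: list[float],
                   markup: float = 1.02) -> float:
    # Recommend a rent anchored to the pooled data of all participants.
    pooled = competitor_rents + [own_rent]
    pooled_average = sum(pooled) / len(pooled)
    # Each landlord is nudged slightly above the pooled average, so prices
    # drift upward together even though no landlord 'agreed' to anything.
    return round(pooled_average * markup, 2)

# Three "competitors" using the same tool receive identical advice.
rents = [1450.0, 1500.0, 1475.0]
for i, rent in enumerate(rents):
    others = rents[:i] + rents[i + 1:]
    print(f"Landlord {i + 1}: {rent} -> {recommend_rent(rent, others)}")
```

Every landlord here is told to charge roughly the same number, a bit above the group average; repeat that advice each month and the “independent” prices climb in lockstep, which is exactly the loss of comparison shopping the FTC describes.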

This is an example of how to use AI for evil. The problem isn’t the tool; it’s the humans using it.

Whitney Grace, June 3, 2024

So AI Is — Maybe, Just Maybe — Not the Economic Big Kahuna?

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I find it amusing how AI has become the go-to marketing word. I suppose if I were desperate, lacking an income, unsure about what will sell, and a follow-the-hyperbole type of person, I would shout, “AI.” Instead I vocalize, “Ai-Yai-Ai,” emulating the tones of a Central American death whistle. Yep, “Ai-Yai-AI.”


Thanks, MSFT Copilot. A harbinger? Good enough.

I read “MIT Professor Hoses Down Predictions AI Will Put a Rocket under the Economy.” I won’t comment upon the fog of distrust which I discern around Big Name Universities, nor will I focus my adjustable Walgreen’s spectacles on MIT’s fancy dancing with the quite interesting and decidedly non-academic Jeffrey Epstein. Nope. Forget those two factoids.

The write up reports:

…Daron Acemoglu, professor of economics at Massachusetts Institute of Technology, argues that predictions AI will improve productivity and boost wages in a “blue-collar bonanza” are overly optimistic.

The good professor is rowing against the marketing current. According to the article, he identifies some wild and crazy forecasts. One of these is from an investment bank whose clients are unlikely to be what some one percenters perceive as non-masters of the universe.

That’s interesting. But it pales in comparison to the information in “Few People Are Using ChatGPT and Other AI Tools Regularly, Study Suggests.” (I love suggestive studies!) That write up reports on a study involving Thomson Reuters, the “trust” outfit:

Carried out by the Reuters Institute and Oxford University and involving 6,000 respondents from the U.S., U.K., France, Denmark, Japan, and Argentina, the researchers found that OpenAI’s ChatGPT is by far the most widely used generative-AI tool and is two or three times more widespread than the next most widely used products — Google Gemini and Microsoft Copilot. But despite all the hype surrounding generative AI over the last 18 months, only 1% of those surveyed are using ChatGPT on a daily basis in Japan, 2% in France and the UK, and 7% in the U.S. The study also found that between 19% and 30% of the respondents haven’t even heard of any of the most popular generative AI tools, and while many of those surveyed have tried using at least one generative-AI product, only a very small minority are, at the current time, regular users deploying them for a variety of tasks.

My hunch is that these contrarians want clicks. Well, the tactic worked for me. However, how many of those in AI-Land will take note? My thought is that these anti-AI findings are likely to be ignored until some of the Big Money folks lose their cash. Then the voices of negativity will be heard.

Several observations:

  1. The economics of AI seem similar to some early online ventures like Pets.com (not “all,” mind you, just some)
  2. Expertise in AI may not guarantee a job at a high-flying techno-feudalist outfit
  3. The difficulties Google appears to be having suggest that the road to AI-Land on the information superhighway may have some potholes. (If Google cannot pull AI off, how can Bob’s Trucking Company armed with Microsoft Word with Copilot?)

Net net: It will be interesting to monitor the frequency of “AI balloon deflating” analyses.

Stephen E Arnold, June 3, 2024


Google: Lost in Its Own AI Maze

May 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One “real” news item caught my attention this morning. Let me tell you. Even with the interesting activities in the Manhattan court, this one jumped out at me. Let’s take a quick look and see if Googzilla (see illustration) can make a successful exit from the AI maze in which the online advertising giant finds itself.


Googzilla is lost in its own AI maze. Can it find a way out? Thanks, MSFT Copilot. Three tries and I got a lizard in a maze. Keep allocating compute cycles to security because obviously Copilot is getting fewer and fewer these days.

“Google Pins Blame on Data Voids for Bad AI Overviews, Will Rein Them In” makes it clear that Google is not blaming itself for some of the wacky outputs its centerpiece AI function has been delivering. I won’t do the guilty-34-times thing. I will just mention the non-toxic glue and pizza item. This news story reports:

Google thinks the AI Overviews for its search engine are great, and is blaming viral screenshots of bizarre results on "data voids" while claiming some of the other responses are actually fake. In a Thursday post, Google VP and Head of Google Search Liz Reid doubles down on the tech giant’s argument that AI Overviews make Google searches better overall—but also admits that there are some situations where the company "didn’t get it right."

So let’s look at that Google blog post titled “AI Overviews: About Last Week.”

How about this statement?

User feedback shows that with AI Overviews, people have higher satisfaction with their search results, and they’re asking longer, more complex questions that they know Google can now help with. They use AI Overviews as a jumping off point to visit web content, and we see that the clicks to webpages are higher quality — people are more likely to stay on that page, because we’ve done a better job of finding the right info and helpful webpages for them.

The statement strikes me as something that a character would say in an episode of the Twilight Zone, a TV series in the 50s and 60s. The TV show had a weird theme, and I thought I heard it playing when I read the official Googley blog post. Is this the Google “bullseye” method or a bullsh*t method?

The official Googley blog post notes:

This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available. (These are challenges that occur with other Search features too.) This approach is highly effective. Overall, our tests show that our accuracy rate for AI Overviews is on par with another popular feature in Search — featured snippets — which also uses AI systems to identify and show key info with links to web content.

Okay, we are into the bullsh*t method. Google search is now a key moment in the Sundar & Prabhakar Comedy Act. Since the début in Paris which featured incorrect data, the Google has been in Code Red, Red Alert, red-faced embarrassment mode. Now the company wants people to eat rocks, and it is not the online advertising giant’s fault. The blog post explains:

There isn’t much web content that seriously contemplates that question, either. This is what is often called a “data void” or “information gap,” where there’s a limited amount of high quality content about a topic. However, in this case, there is satirical content on this topic … that also happened to be republished on a geological software provider’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question. In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.

Okay, I think one component of the bullsh*t method is that it is not Google’s fault. “Users” — not customers, because Google has advertising clients, partners, and some lobbyists. Everyone else is a user, and it is the users’ fault, the data creators’ fault, and probably Sam AI-Man’s fault. (Did I omit anyone on whom to blame the “let them eat rocks” result?)

And the Google cares. This passage is worthy of a Hallmark card with a foldout:

At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors. We’ve learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone. We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback.

What’s my take on this?

  1. The assumption that Google search is “good” is interesting, just not in line with what I hear, read, and experience when I do use Google. Note: My personal usage has decreased over time.
  2. Google is trying to explain away its obvious flaws. The Google speak may work for some people, just not for me.
  3. The tone is that of an entitled seventh-grader from a wealthy family, not the type of language I find particularly helpful when the “smart” Google software has to be remediated by humans. Google is terminating humans, right? Now Google needs humans. What’s up, Google?

Net net: Google is snagged in its own AI maze. I am growing less confident in the company’s ability to extricate itself. The Sam AI-Man has crafted deals with two outfits big enough to make Google’s life more interesting. Google’s own management seems ineffectual despite the flashing red and yellow lights and the honking of alarms. Google’s wordsmiths and lawyers are running out of verbal wiggle room. But most important, the failure of the bullseye method and the oozing comfort of the bullsh*t method mark a turning point for the company.

Stephen E Arnold, May 31, 2024

