Anti-AI Fact Checking. What?

November 21, 2023

This essay is the work of a dumb dinobaby. No smart software required.

If this effort is sincere, at least one news organization is taking AI’s ability to generate realistic fakes seriously. Variety briefly reports, “CBS Launches Fact-Checking News Unit to Examine AI, Deepfakes, Misinformation.” Aptly dubbed “CBS News Confirmed,” the unit will be led by VPs Claudia Milne and Ross Dagan. Writer Brian Steinberg tells us:

“The hope is that the new unit will produce segments on its findings and explain to audiences how the information in question was determined to be fake or inaccurate. A July 2023 research note from the Northwestern Buffett Institute for Global Affairs found that the rapid adoption of content generated via A.I. ‘is a growing concern for the international community, governments and the public, with significant implications for national security and cybersecurity. It also raises ethical questions related to surveillance and transparency.’”

Why yes, good of CBS to notice. And what will it do about it? We learn:

“CBS intends to hire forensic journalists, expand training and invest in new technology, [CBS CEO Wendy] McMahon said. Candidates will demonstrate expertise in such areas as AI, data journalism, data visualization, multi-platform fact-checking, and forensic skills.”

So they are still working out the details, but want us to rest assured they have a plan. Or an outline. Or maybe a vague notion. At least CBS acknowledges this is a problem. Now what about all the other news outlets?

Cynthia Murrell, November 21, 2023

OpenAI: What about Uncertainty and Google DeepMind?

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

A large number of write ups about Microsoft and its response to the OpenAI management move populate my inbox this morning (Monday, November 20, 2023).

To give you a sense of the number of poohbahs, mavens, and “real” journalists covering Microsoft’s hiring of Sam (AI-Man) Altman, I offer this screen shot of Techmeme.com taken at 11:00 am US Eastern time:

[Screenshot: Techmeme.com headlines, 11:00 am US Eastern, November 20, 2023]

A single screenshot cannot do justice to the digital bloviating on this subject as well as related matters.

I did a quick scan because I simply don’t have the time at age 79 to read every item in this single headline service. Therefore, I admit that others may have thought about the impact of the Steve Jobs-like termination, the revolt of some AI wizards, and Microsoft’s creating a new “company” and hiring Sam AI-Man and a pride of his cohorts in the span of 72 hours (give or take time for biobreaks).

In this short essay, I want to hypothesize about how the news has been received by that merry band of online advertising professionals.

To begin, I want to suggest that the turmoil about who is on first at OpenAI sent a low voltage signal through the collective body of the Google. Frisson resulted. Uncertainty and opportunity appeared together like the beloved Scylla and Charybdis, the old pals of Ulysses. The Google found its right and left Brainiac hemispheres considering that OpenAI would experience a grave setback, thus clearing a path for Googzilla alone. Then one of the Brainiac hemispheres reconsidered and perceived a grave threat from the split. In short, the Google tipped into its zone of uncertainty.


A group of online advertising experts meets to consider the news that Microsoft has hired Sam Altman. The group looks unhappy. Uncertainty is an unpleasant factor in some business decisions. Thanks, Microsoft Copilot, you captured the spirit of how some Silicon Valley wizards are reacting to the OpenAI turmoil because Microsoft used the OpenAI termination of Sam Altman as a way to gain the upper hand in the cloud and enterprise app AI sector.

Then the matter appeared to shift back to the pre-termination announcement. The co-founder of OpenAI gained more information about the number of OpenAI employees who were planning to quit or, even worse, start posting on Instagram, WhatsApp, and TikTok. (X.com is no longer considered the go-to place by the in crowd.)

The most interesting development was not that Sam AI-Man would return to the welcoming arms of OpenAI. No, Sam AI-Man and another senior executive were going to hook up with the geniuses of Redmond. A new company would be formed with Sam AI-Man in charge.

As these actions unfolded, the Googlers sank under a heavy cloud of uncertainty. What if the Softies could use Google’s own open source methods, integrate rumored Microsoft-developed AI capabilities, and make good on Sam AI-Man’s vision of an AI application store?

The Googlers found themselves reading every “real news” item about the trajectory of Sam AI-Man and Microsoft’s new AI unit. The uncertainty has morphed into another January 2023 Davos moment. Here’s my take as of 2:30 pm US Eastern, November 20, 2023:

  1. The Google faces a significant threat when it comes to enterprise AI apps. Microsoft has a lock on law firms, the government, and a number of industry sectors. Google has a presence, but when it comes to go-to apps, Microsoft is the Big Dog. More and better AI raises the specter of Microsoft putting an effective laser defense behind its existing enterprise moat.
  2. Microsoft can push its AI functionality as the Azure difference. Furthermore, if Google or, for that matter, Amazon asserts its cloud AI is better, Microsoft can argue, “We’re better because we have Sam AI-Man.” That is a compelling argument for government and enterprise customers who cannot imagine work without Excel and PowerPoint. Put more AI in those apps, and existing customers will resist blandishments from other cloud providers.
  3. Google now faces an interesting problem: Its own open source code could be converted into a death ray, enhanced by Sam AI-Man, and directed at the Google. The irony of Googzilla having its left claw vaporized by its own technology is going to be more painful than Satya Nadella rolling out another Davos “we’re doing AI” announcement.

Net net: The OpenAI machinations are interesting to many companies. To the Google, the OpenAI event and the Microsoft response is like an unsuspecting person getting zapped by Nikola Tesla’s coil. Google’s mastery of high school science club management techniques will now dig into the heart of its DeepMind.

Stephen E Arnold, November 20, 2023

Google Pulls Out a Rhetorical Method to Try to Win the AI Spoils

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

In high school in 1958, our debate team coach yapped about “framing.” The idea was new to me, and Kenneth Camp pounded it into our debate team’s collective “head” for the four years of my high school tenure. Not surprisingly, when I read “Google DeepMind Wants to Define What Counts As Artificial General Intelligence” I jumped back in time 65 years (!) to Mr. Camp’s explanation of framing and how one can control the course of a debate with the technique.

Google should not have to use a rhetorical trick to make its case as the quantum wizard of online advertising and universal greatness. With its search and retrieval system, the company can boost, shape, and refine any message it wants. If those methods fall short, the company can slap on a “filter” or “change its rules” and deprecate certain Web sites and their messages.

But Google values academia, even if the university is one that welcomed a certain Jeffrey Epstein into its fold. (Do you remember the remarkable Jeffrey Epstein? Some of those whom he touched do, I believe.) The estimable Google is the subject of the referenced article in the MIT-linked Technology Review.

From my point of view, the big idea in the write up is, and I quote:

To come up with the new definition, the Google DeepMind team started with prominent existing definitions of AGI and drew out what they believe to be their essential common features. The team also outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals). They note that no level beyond emerging AGI has been achieved.

Shades of high school debate practice and the chestnuts scattered about the rhetorical camp fire as John Schunk, Jimmy Bond, and a few others (including the young dinobaby me) learned how one can set up a frame, populate the frame with logic and facts supporting the frame, and then point out during rebuttal that our esteemed opponents were not able to dent our well formed argumentative frame.

Is Google the optimal source for a definition of artificial general intelligence, something which does not yet exist? Is Google’s definition more useful than a science fiction writer’s or a scene from a Hollywood film?

Even the trusted online source points out:

One question the researchers don’t address in their discussion of _what_ AGI is, is _why_ we should build it. Some computer scientists, such as Timnit Gebru, founder of the Distributed AI Research Institute, have argued that the whole endeavor is weird. In a talk in April on what she sees as the false (even dangerous) promise of utopia through AGI, Gebru noted that the hypothetical technology “sounds like an unscoped system with the apparent goal of trying to do everything for everyone under any environment.” Most engineering projects have well-scoped goals. The mission to build AGI does not. Even Google DeepMind’s definitions allow for AGI that is indefinitely broad and indefinitely smart. “Don’t attempt to build a god,” Gebru said.

I am certain it is an oversight, but the telling comment comes from an individual who may have spoken out about Google’s systems and methods for smart software.


Mr. Camp, the high school debate coach, explains how a rhetorical trope can gut even those brilliant debaters from other universities. (Yes, Dartmouth, I am still thinking of you.) Google must have had a “coach” skilled in the power of framing. The company is making a bold move to define that which does not yet exist and something whose functionality is unknown. Such is the expertise of the Google. Thanks, Bing. I find your use of people of color interesting. Is this a pre-Sam ouster or a post-Sam ouster function?

What do we learn from the write up? In my view of the AI landscape, we are given some insight into Google’s belief that its rhetorical trope packaged as content marketing within an academic-type publication will lend credence to the company’s push to generate more advertising revenue. You may ask, “But won’t Google make oodles of money from smart software?” I concede that it will. However, the big bucks for the Google come from those willing to pay for eyeballs. And that, dear reader, translates to advertising.

Stephen E Arnold, November 20, 2023

Sigh, More Doom and Gloom about Smart Software

November 20, 2023

This essay is the work of a dumb humanoid. No smart software required.

Hey, the automatic popcorn function works in your microwave, right? That’s a form of smart software. But that’s okay. The embedding of AI in a personnel review is less benign. Letting AI develop a bio-weapon system is definitely bad. Sci fi or real life?


An AI researcher explains to her colleagues that smart software will destroy their careers, ruin their children’s lives, and destroy the known universe. The colleagues are terrified except for those consulting for firms engaged in the development and productization of AI products and services. Thanks, Microsoft Bing. You have the hair of a scientist figured out.

I read “AI Should Be Better Understood and Managed — New Research Warns.” The main idea, according to the handy dandy summary (which may have been generated by an AI system), is this “warning”:

Artificial Intelligence (AI) and algorithms can and are being used to radicalize, polarize, and spread racism and political instability, says an academic. An expert argues that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but can be contributors to polarization, radicalism and political violence — posing a threat to national security.

Who knew? I wonder if the developers in China, Iran, North Korea, and any other members of the “axis of evil” are paying attention to the threats of smart software, developing with and using the technologies, and following along with the warnings of Western Europeans? My hunch is that the answer is, “Are you kidding?”

I noted this statement in the paper:

“This lack of trust in machines, the fears associated with them, and their association with biological, nuclear and genetic threats to humankind has contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and (in some cases) to harness its positive potentiality,” writes Professor Burton.

I assume the author includes “all” of earth’s governments. Now that strikes me as a somewhat challenging task. Exactly what coordinating group will undertake the job? A group of academics in the EU? Some whiz kids at Google or OpenAI? How about the US Congress?

Yeah.

Gentle reader, the AI cat is out of the bag, and I am growing less responsive to the fear mongering.

Stephen E Arnold, November 20, 2023

OpenAI: Permanent CEO Needed

November 17, 2023

This essay is the work of a dumb dinobaby. No smart software required.

My rather lame newsreader spit out an “urgent alert” for me. Like the old teletype terminal: Ding, ding, ding, and a bunch of asterisks.

Surprise. Sam AI-Man allegedly has been given the opportunity to find his future elsewhere. Let me translate blue chip consultant speak for you. The “find your future elsewhere” phrase means you have been fired, RIFed, terminated with extreme prejudice, or “there’s the door. Use it now.” The particularly connotative spin depends on the person issuing the formal statement.


“Keep in mind that we will call you,” says the senior member of the Board of Directors. The head of the human resources committee says, “Remember. We don’t provide a reference. Why not try the Google AI system?” Thank you, MSFT Copilot. You must have been trained on content about Mr. Ballmer’s departure.

“OpenAI Fires Co-Founder and CEO Sam Altman for Lying to Company Board” states as rock solid basaltic truth:

OpenAI CEO and co-founder Sam Altman was fired for lying to the board of his company.

The good news is that a succession option, of sorts, is in place. Accordingly, OpenAI’s chief technical officer has become the “interim CEO.” I like the “interim.” That’s solid.

For the moment, let’s assume the RIF statement is true. Furthermore, on this rainy Saturday in rural Kentucky, I shall speculate about the reasons for this announcement. Here we go:

  1. The problem is money, the lack thereof, or the impossibility of controlling the costs of the OpenAI system. Perhaps Sam AI-Man said, “Money is no problem.” The Board did not agree. Money is the problem.
  2. The lovey dovey relationship with the Microsofties has hit a rough patch. MSFT’s noises have been faint and now may become louder about AI chips, options, and innovations. Will these Microsoft bleats become more shrill as the ageing giant feels pain while it tries to make marketing hyperbole a reality? Let’s ask the Copilot, shall we?
  3. The Board has realized that the hyperbole has exceeded OpenAI’s technical ability to solve such problems as made up data (hallucinations), the resources to cope with the looming legal storm clouds related to unlicensed use of some content (the Copyright Shield “promise”), fixing up the baked-in bias of the system, and / or OpenAI ChatGPT’s vulnerability to nifty prompt engineering to override alleged “guardrails.”

What’s next?

My answer is, “Uncertainty.” Cue the Ray Charles hit with the lyric “Hit the road, Jack. Don’t you come back no more, no more, no more, no more.” (I did not steal this song; I found it via Google on the Google YouTube. Honest.) I admit I did hear the tune playing in my head when I read the Guardian story.

Stephen E Arnold, November 17, 2023


Adobe: Delivers Real Fake War Images

November 17, 2023

This essay is the work of a dumb humanoid. No smart software required.

Gee, why are we not surprised? Crikey reveals, “Adobe Is Selling Fake AI Images of the War in Israel-Gaza.” While Adobe did not set out to perpetuate fake news about the war, neither did it try very hard to prevent it. Reporter Cam Wilson writes:

“As part of the company’s embrace of generative artificial intelligence (AI), Adobe allows people to upload and sell AI images as part of its stock image subscription service, Adobe Stock. Adobe requires submitters to disclose whether they were generated with AI and clearly marks the image within its platform as ‘generated with AI’. Beyond this requirement, the guidelines for submission are the same as any other image, including prohibiting illegal or infringing content. People searching Adobe Stock are shown a blend of real and AI-generated images. Like ‘real’ stock images, some are clearly staged, whereas others can seem like authentic, unstaged photography. This is true of Adobe Stock’s collection of images for searches relating to Israel, Palestine, Gaza and Hamas. For example, the first image shown when searching for Palestine is a photorealistic image of a missile attack on a cityscape titled ‘Conflict between Israel and Palestine generative AI’. Other images show protests, on-the-ground conflict and even children running away from bomb blasts — all of which aren’t real.”

Yet these images are circulating online, adding to the existing swirl of misinformation. Even several small news outlets have used them with no disclaimers attached. They might not even realize the pictures are fake.

Or perhaps they do. Wilson consulted RMIT’s T.J. Thomson, who has been researching the use of AI-generated images. He reports that, while newsrooms are concerned about misinformation, they are sorely tempted by the cost-savings of using generative AI instead of on-the-ground photographers. One supposes photographer safety might also be a concern. Is there any stuffing this cat into the bag, or must we resign ourselves to distrusting any images we see online?

A loss suffered in the war is real. Need an image of this?

Cynthia Murrell, November 17, 2023

AI Is a Rainmaker for Bad Actors

November 16, 2023

This essay is the work of a dumb dinobaby. No smart software required.

How has smart software, readily available as open source code and low-cost online services, affected cyber crime? Please, select from one of the following answers. No cheating allowed.

[a] Bad actors love smart software.

[b] Criminals are exploiting smart orchestration and business process tools to automate phishing.

[c] Online fraudsters have found that launching repeated breaching attempts is faster and easier when AI is used to adapt to server responses.

[d] Finding mules for drug and human trafficking is easier than ever because social media requests for interested parties can be cranked out at high speed 24×7.


“Well, Slim, your idea to use that new fangled smart software to steal financial data is working. Sittin’ here counting the money raining down on us is a heck of a lot easier than robbing old ladies in the Trader Joe’s parking lot,” says the bad actor with the coffin nail of death in his mouth and the ill-gotten gains in his hands. Thanks, Copilot, you are producing nice cartoons today.

And the correct answer is … a, b, c, and d.

For some supporting information, navigate to “Deepfake Fraud Attempts Are Up 3000% in 2023. Here’s Why.” The write up reports:

Face-swapping apps are the most common example. The most basic versions crudely paste one face on top of another to create a “cheapfake.” More sophisticated systems use AI to morph and blend a source face onto a target, but these require greater resources and skills.  The simple software, meanwhile, is easy to run and cheap or even free. An array of forgeries can then be simultaneously used in multiple attacks.

I like the phrase “cheap fakes.”

Several observations:

  1. Bad actors, unencumbered by bureaucracy, can download, test, tune, and deploy smart criminal actions more quickly than law enforcement can thwart them.
  2. Existing cyber security systems are vulnerable to some smart attacks because AI can adapt and try different avenues.
  3. Large volumes of automated content can be created and emailed without the hassle of manual content creation.
  4. Cyber security vendors operate in “react mode”; that is, once a problem is discovered, the good actors develop a defense. The advantage goes to those with a good offense, not a good defense.

Net net: 2024 will be fraught with security issues.

Stephen E Arnold, November 16, 2023

Using Smart Software to Make Google Search Less Awful

November 16, 2023

This essay is the work of a dumb humanoid. No smart software required.

Here’s a quick tip: to get useful results from Google Search, use a competitor’s software. Digital Digging blogger Henk van Ess describes “How to Teach ChatGPT to Come Up with Google Formulas.” Specifically, van Ess needed to include foreign-language results in his queries while narrowing results to certain time frames. These are not parameters Google handles well on its own. It was ChatGPT to the rescue—after some tinkering, anyway. He describes an example search goal:

“Find any official document about carbon dioxide reduction from Greek companies, anything from March 24, 2020 to December 21, 2020 will do. Hey, can you search that in Greek, please? Tough question right? Time to fire up Bing or ChatGPT. Round 1 in #chatgpt has a terrible outcome.”

But of course, van Ess did not stop there. For the technical details on the resulting “ball of yarn,” how van Ess resolved it, and how it can be extrapolated to other use cases, navigate to the write-up. One must bother to learn how to write effective prompts to get these results, but van Ess insists it is worth the effort. The post observes:

“The good news is: you only have to do it once for each of your favorite queries. Set and forget, as you just saw I used the same formulae for Greek CO2 and Japanese EV’s. The advantage of natural language processing tools like ChatGPT is that they can help you generate more accurate and relevant search queries in a faster and more efficient way than manually typing in long and complex queries into search engines like Google. By using natural language processing tools to refine and optimize your search queries, you can avoid falling into ‘rabbit holes’ of irrelevant or inaccurate results and get the information you need more quickly and easily.”
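
To make the idea concrete, here is a minimal sketch, in Python, of the kind of reusable query “formula” a well-crafted ChatGPT prompt can produce. The `after:`, `before:`, `site:`, and `filetype:` operators are standard Google search syntax; the Greek keyword, the .gr site filter, and the PDF file type are illustrative assumptions, not details taken from van Ess’s post.

```python
# A minimal sketch of a reusable Google query "formula" of the sort a
# good ChatGPT prompt can generate. The keyword, TLD, and file type are
# illustrative assumptions, not taken from van Ess's article.

def google_formula(keyword: str, site: str, start: str, end: str,
                   filetype: str = "pdf") -> str:
    """Build a Google query narrowed by site, file type, and date range.

    after:/before: accept YYYY-MM-DD dates; site: and filetype: restrict
    the sources. All four are documented Google search operators.
    """
    return (f'"{keyword}" site:{site} filetype:{filetype} '
            f'after:{start} before:{end}')

if __name__ == "__main__":
    # Greek-language query for CO2-reduction documents from Greek sites,
    # March 24 to December 21, 2020 (the window named in the example goal).
    print(google_formula("μείωση διοξειδίου του άνθρακα",  # "CO2 reduction"
                         ".gr", "2020-03-24", "2020-12-21"))
```

Once such a formula works, it can be reused by swapping the keyword and dates, which is the “set and forget” point the post makes.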

Google is currently rolling out its own AI search “experience” in phases around the world. Will it improve results, or will one still be better off employing third-party hacks?

Cynthia Murrell, November 16, 2023

Hitting the Center Field Wall, AI Suffers an Injury!

November 15, 2023

This essay is the work of a dumb, dinobaby humanoid. No smart software required.

At a reception at a government facility in Washington, DC, last week, one of the bright young sparks told me, “Every investment deal I see gets funded if it includes the words ‘artificial intelligence.’” I smiled and moved to another conversation. Wow, AI has infused the exciting world of a city built on the swampy marge of the Potomac River.

I think that the go-go era of smart software has reached a turning point. Venture firms and consultants may not have received the email with this news. However, my research team has, and the update contains information on two separate thrusts of the AI revolution.


The heroic athlete, supported by his publicist, makes a heroic effort to catch the long fly ball. Unfortunately, our star runs into the wall, drops the ball, and suffers what may be a career-ending injury to his left hand. (It looks broken, doesn’t it?) Oh, well. Thanks, MSFT Bing. The perspective is weird and there is trash on the ground, but the image is good enough.

The first signal appears in “AI Companies Are Running Out of Training Data.” The notion that online information is infinite is a quaint one. But in the fever of moving to online, reality is less interesting than the euphoria of the next gold rush or the new Industrial Revolution. Futurism reports:

Data plays a central role, if not the central role, in the AI economy. Data is a model’s vital force, both in basic function and in quality; the more natural — as in, human-made — data that an AI system has to train on, the better that system becomes. Unfortunately for AI companies, though, it turns out that natural data is a finite resource — and if that tap runs dry, researchers warn they could be in for a serious reckoning.

The information or data in question is not the smog emitted by modern automobiles’ chip-stuffed boxes. Nor is the data the streams of geographic information gathered by mobile phone systems. The high-value data are those which matter; for example, in a stream of security information, which specific stock is moving because it is being manipulated by one of those bright young minds I met at the DC event.

The article “AI Companies Are Running Out of Training Data” adds:

But as data becomes increasingly valuable, it’ll certainly be interesting to see how many AI companies can actually compete for datasets — let alone how many institutions, or even individuals, will be willing to cough their data over to AI vacuums in the first place. But even then, there’s no guarantee that the data wells won’t ever run dry. As infinite as the internet seems, few things are actually endless.

The fix is synthetic or faked data; that is, fabricated data which appears to replicate real-life behavior. (Don’t you love it when Google predicts the weather or a smarty pants games the crypto market?)
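
For readers who want the concept pinned down, here is a toy sketch of what “synthetic data” means in the simplest case: fit the statistics of a small real sample, then fabricate new values that mimic it. The sample numbers are invented for illustration; production synthetic-data pipelines are far more elaborate than this.

```python
# Toy illustration of synthetic data: fit a simple statistical model to a
# small "real" sample, then fabricate new values from the fitted model.
# The sample values below are invented for this example.
import random
import statistics

real_sample = [102.5, 98.7, 101.2, 99.9, 100.4, 97.8]

mu = statistics.mean(real_sample)      # fitted mean
sigma = statistics.stdev(real_sample)  # fitted standard deviation

# Draw five fabricated values that replicate the sample's statistics.
synthetic = [round(random.gauss(mu, sigma), 1) for _ in range(5)]
print(synthetic)
```

The fabricated values look plausible because they share the real sample’s statistics, which is exactly why such data can stand in for the genuine article, and exactly why it can mislead.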

The message is simple: Smart software has ground through the good stuff and may face its version of an existential crisis. That’s different from the rah rah one usually hears about AI.

The second item my team called to my attention appears in a news story called “OpenAI Pauses New ChatGPT Plus Subscriptions Due to Surge in Demand.” I read the headline as saying, “Oh, my goodness, we don’t have the money or the capacity to handle more users’ requests.”

The article expresses the idea in this snappy 21st century way:

The decision to pause new ChatGPT signups follows a week where OpenAI services – including ChatGPT and the API – experienced a series of outages related to high-demand and DDoS attacks.

Okay, security and capacity.

What are the implications of these two unrelated stories?

  1. The run up to AI has been boosted with system operators ignoring copyright and picking low hanging fruit. The orchard is now looking thin. Apples grow on trees, just not quickly, and overcultivation can ruin the once fertile soil. Think a digital Dust Bowl perhaps?
  2. The friction of servicing user requests is causing slowdowns. Can the heat be dissipated? Absolutely, but the fix requires money, more than high school science club management techniques, and common sense. Do AI companies exhibit common sense? Yeah, sure. Every day.
  3. The lack of high-value or sort of good information is a bummer. Machines producing insights into the dark activities of bad actors and the thoughts of 12-year-olds are grinding along. However, the value of the information outputs seems to be lagging behind the marketers’ promises. One telling example is the outright failure of Israel’s smart software to have utility in identifying the intent of bad actors. My goodness, if any country has smart systems, it’s Israel. Based on events in the last couple of months, the flows of data produced what appears to be a failing grade.

If we take these two cited articles’ information at face value, one can make a case that the great AI revolution may be facing some headwinds. In a winner-take-all game like AI, there will be some Sad Sacks at those fancy Washington, DC receptions. Time to innovate and renovate perhaps?

Stephen E Arnold, November 15, 2023

Google Solves Fake Information with the Tom Sawyer Method

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

How does one deliver “responsible AI”? Easy. Shift the work to those who use a system built on smart software. I call the approach the “Tom Sawyer Method.” The idea is that the fictional character (Tom) convinced lesser lights to paint the fence for him. Sammy Clemens (the guy who invested in the typewriter) said:

“Work consists of whatever a body is obliged to do. Play consists of whatever a body is not obliged to do.”

Thus the information in “Our Approach to Responsible AI Innovation” is play. The work is for those who cooperate to do the real work. The moral is, “We learn more about Google than we do about responsible AI innovation.”


The young entrepreneur says, “You fellows chop the wood.  I will go and sell it to one of the neighbors. Do a good job. Once you finish you can deliver the wood and I will give you your share of the money. How’s that sound?” The friends are eager to assist their pal. Thanks Microsoft Bing. I was surprised that you provided people of color when I asked for “young people chopping wood.” Interesting? I think so.

The Google write up from a trio of wizard vice presidents at the online advertising company says:

…we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material.

Yep, “require.” But what I want to do is to translate Google speak into something dinobabies understand. Here’s my translation:

  1. Google cannot determine what content is synthetic and what is not; therefore, the person using our smart software has to tell us, “Hey, Google, this is fake.” (A sketch of this self-disclosure model appears after this list.)
  2. Google does not want to increase headcount and costs related to synthetic content detection and removal. Therefore, the work is moved via the Tom Sawyer Method to YouTube “creators” or fence painters. Google gets the benefit of reduced costs, hopefully reduced liability, and “play” like Foosball.
  3. Google can look at user provided metadata and possibly other data in the firm’s modest repository and determine with acceptable probability that a content object and a creator should be removed, penalized, or otherwise punished by a suitable action; for example, not allowing a violator to buy Google merchandise. (Buying Google AdWords is okay, however.)
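
For illustration only, here is a minimal sketch of the self-disclosure model the policy describes: the platform never inspects the content itself; it simply relays whatever flag the creator set at upload time. Every name and field below is hypothetical, not Google’s actual API.

```python
# Hypothetical sketch of creator self-disclosure: the platform labels
# content based only on the flag the uploader chose to set. All names
# and fields here are invented for illustration; this is not Google's API.
from dataclasses import dataclass

@dataclass
class Upload:
    creator: str
    title: str
    declared_synthetic: bool  # the box the creator ticks (or does not)

def viewer_label(upload: Upload) -> str:
    # The platform knows only what the creator disclosed.
    if upload.declared_synthetic:
        return "Contains altered or synthetic content"
    return ""  # an undeclared fake sails through unlabeled

print(viewer_label(Upload("creator123", "Realistic news clip", True)))
print(repr(viewer_label(Upload("creator456", "Undisclosed fake", False))))
```

The second call shows the obvious weakness: an uploader who skips the disclosure produces no label at all, which is the fence-painting arrangement in a nutshell.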

The write up concludes with this bold statement: “The AI transformation is at our doorstep.” Inspiring. Now wood choppers, you can carry the firewood into the den and stack it by the fireplace in which we burn the commission checks the offenders were to receive prior to their violating the “requirements.”

Ah, Google, such a brilliant source of management inspiration: A novel written in 1876. I did not know that such old information was in the Google index. I mean DejaVu is consigned to the dust bin. Why not Mark Twain’s writings?

Stephen E Arnold, November 14, 2023
