Google Pulls Out a Rhetorical Method to Try to Win the AI Spoils

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

In high school in 1958, our debate team coach yapped about “framing.” The idea was new to me, and Kenneth Camp pounded it into our debate team’s collective “head” for the four years of my high school tenure. Not surprisingly, when I read “Google DeepMind Wants to Define What Counts As Artificial General Intelligence” I jumped back in time 65 years (!) to Mr. Camp’s explanation of framing and how one can control the course of a debate with the technique.

Google should not have to use a rhetorical trick to make its case as the quantum wizard of online advertising and universal greatness. With its search and retrieval system, the company can boost, shape, and refine any message it wants. If those methods fall short, the company can slap on a “filter” or “change its rules” and deprecate certain Web sites and their messages.

But Google values academia, even if the university is one that welcomed a certain Jeffrey Epstein into its fold. (Do you remember the remarkable Jeffrey Epstein? Some of those whom he touched do, I believe.) The estimable Google is the subject of the referenced article in the MIT-linked Technology Review.

From my point of view, the big idea in the write up is, and I quote:

To come up with the new definition, the Google DeepMind team started with prominent existing definitions of AGI and drew out what they believe to be their essential common features. The team also outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals). They note that no level beyond emerging AGI has been achieved.

Shades of high school debate practice and the chestnuts scattered about the rhetorical camp fire as John Schunk, Jimmy Bond, and a few others (including the young dinobaby me) learned how one can set up a frame, populate the frame with logic and facts supporting the frame, and then point out during rebuttal that our esteemed opponents were not able to dent our well formed argumentative frame.

Is Google the optimal source for a definition of artificial general intelligence, something which does not yet exist? Is Google’s definition more useful than a science fiction writer’s or a scene from a Hollywood film?

Even the trusted online source points out:

One question the researchers don’t address in their discussion of _what_ AGI is, is _why_ we should build it. Some computer scientists, such as Timnit Gebru, founder of the Distributed AI Research Institute, have argued that the whole endeavor is weird. In a talk in April on what she sees as the false (even dangerous) promise of utopia through AGI, Gebru noted that the hypothetical technology “sounds like an unscoped system with the apparent goal of trying to do everything for everyone under any environment.” Most engineering projects have well-scoped goals. The mission to build AGI does not. Even Google DeepMind’s definitions allow for AGI that is indefinitely broad and indefinitely smart. “Don’t attempt to build a god,” Gebru said.

I am certain it is an oversight, but the telling comment comes from an individual who may have spoken out about Google’s systems and methods for smart software.


Mr. Camp, the high school debate coach, explains how a rhetorical trope can gut even those brilliant debaters from other universities. (Yes, Dartmouth, I am still thinking of you.) Google must have had a “coach” skilled in the power of framing. The company is making a bold move to define that which does not yet exist and something whose functionality is unknown. Such is the expertise of the Google. Thanks, Bing. I find your use of people of color interesting. Is this a pre-Sam ouster or a post-Sam ouster function?

What do we learn from the write up? In my view of the AI landscape, we are given some insight into Google’s belief that its rhetorical trope packaged as content marketing within an academic-type publication will lend credence to the company’s push to generate more advertising revenue. You may ask, “But won’t Google make oodles of money from smart software?” I concede that it will. However, the big bucks for the Google come from those willing to pay for eyeballs. And that, dear reader, translates to advertising.

Stephen E Arnold, November 20, 2023

Adobe: Delivers Real Fake War Images

November 17, 2023

This essay is the work of a dumb humanoid. No smart software required.

Gee, why are we not surprised? Crikey reveals, “Adobe Is Selling Fake AI Images of the War in Israel-Gaza.” While Adobe did not set out to perpetuate fake news about the war, neither did it try very hard to prevent it. Reporter Cam Wilson writes:

“As part of the company’s embrace of generative artificial intelligence (AI), Adobe allows people to upload and sell AI images as part of its stock image subscription service, Adobe Stock. Adobe requires submitters to disclose whether they were generated with AI and clearly marks the image within its platform as ‘generated with AI’. Beyond this requirement, the guidelines for submission are the same as any other image, including prohibiting illegal or infringing content. People searching Adobe Stock are shown a blend of real and AI-generated images. Like ‘real’ stock images, some are clearly staged, whereas others can seem like authentic, unstaged photography. This is true of Adobe Stock’s collection of images for searches relating to Israel, Palestine, Gaza and Hamas. For example, the first image shown when searching for Palestine is a photorealistic image of a missile attack on a cityscape titled ‘Conflict between Israel and Palestine generative AI’. Other images show protests, on-the-ground conflict and even children running away from bomb blasts — all of which aren’t real.”

Yet these images are circulating online, adding to the existing swirl of misinformation. Even several small news outlets have used them with no disclaimers attached. They might not even realize the pictures are fake.

Or perhaps they do. Wilson consulted RMIT’s T.J. Thomson, who has been researching the use of AI-generated images. He reports that, while newsrooms are concerned about misinformation, they are sorely tempted by the cost-savings of using generative AI instead of on-the-ground photographers. One supposes photographer safety might also be a concern. Is there any stuffing this cat into the bag, or must we resign ourselves to distrusting any images we see online?

A loss suffered in the war is real. Need an image of this?

Cynthia Murrell, November 17, 2023

Buy Google Traffic: Nah, Paying May Not Work

November 16, 2023

This essay is the work of a dumb humanoid. No smart software required.

Tucked into a write up about the less than public trial of the Google was an interesting factoid. The source of the item was “More from the US v Google Trial: Vertical Search, Pre-Installs and the Case of Firefox / Yahoo.” Here’s the snippet:

Expedia execs also testified about the cost of ads and how increases had no impact on search results. On October 19, Expedia’s former chief operating officer, Jeff Hurst, told the court the company’s ad fees increased tenfold from $21 million in 2015 to $290 million in 2019. And yet, Expedia’s traffic from Google did not increase. The implication was that this was due to direct competition from Google itself. Hurst pointed out that Google began sharing its own flight and hotel data in search results in that period, according to the Seattle Times.


“Yes, sir, you can buy a ticket and enjoy our entertainment,” says the theater owner. The customer asks, “Is the theater in good repair?” The ticket seller replies, “Of course, you get your money’s worth at our establishment. Next.” Thanks, Microsoft Bing. It took several tries before I gave up.

I am a dinobaby, and I am, by definition, hopelessly out of it. However, I interpret this passage in this way:

  1. Despite protestations about the Google algorithm’s objectivity, Google has knobs and dials it can use to cause the “objective” algorithm to be just a teenie weenie less objective. Is this a surprise? Not to me. Who builds a system without a mechanism for controlling what it does? My favorite example of this steering involves the original FirstGov.gov search system circa 2000. After Mr. Clinton left office, the new administration (in the person of a former Halliburton executive) wanted a certain Web page result to appear when certain terms were searched. No problemo. Why? Who builds a system one cannot control? Not me. My hunch is that Google may have a similar affection for knobs and dials.
  2. Expedia learned that buying advertising from a competitor (Google) was expensive and then got more expensive. The jump from $21 million to $290 million is modest from the point of view of some technology feudalists. To others the increase is stunning.
  3. Paying more money did not result in an increase in clicks or traffic. Again I was not surprised. What caught my attention is that it has taken decades for others to figure out how the digital highwaymen came riding like a wolf on the fold. Instead of being bedecked with silver and gold, these actors wore those cheerful kindergarten colors. Oh, those colors are childish, but those wearing them carried away the silver and gold it seems.

Net net: Why is this US v Google trial not more public? Why so many documents withheld? Why is redaction the best billing tactic of 2023? So many questions that this dinobaby cannot answer. I want to go for a ride in the Brin-A-Loon too. I am a simple dinobaby.

Stephen E Arnold, November 16, 2023

An Odd Couple Sharing a Soda at a Holiday Data Lake

November 16, 2023

What happens when love strikes the senior managers of the technology feudal lords? I will tell you what happens — Love happens. The proof appears in “Microsoft and Google Join Forces on OneTable, an Open-Source Solution for Data Lake Challenges.” Yes, the lakes around Redmond can be a challenge. For those living near Googzilla’s stomping grounds, the risk is that a rising sea level will nuke the outdoor recreation areas and flood the parking lots.

But any speed dating between two techno feudalists is news. The “real news” outfit Venture Beat reports:

In a new open-source partnership development effort announced today, Microsoft is joining with Google and Onehouse in supporting the OneTable project, which could reshape the cloud data lake landscape for years to come.

And what does “reshape” mean to these outfits? Probably nothing more than making sure that Googzilla and Mothra become the suppliers to those who want to vacation at the data lake. Come to think of it, the concessions might be attractive as well.


Googzilla says to Mothra-Soft, a beast living in Mercer Island, “I know you live on the lake. It’s a swell nesting place. I think we should hook up and cooperate. We can share the money from merged data transfers the way you and I —  you good looking Lepidoptera — are sharing this malted milk. Let’s do more together if you know what I mean.” The delightful Mothra-Soft croons, “I thought you would wait until our high school reunion to ask, big boy. Let’s find a nice, moist, uncrowded place to consummate our open source deal, handsome.” Thanks, Microsoft Bing. You did a great job of depicting a senior manager from the company that developed Bob, the revolutionary interface.

The article continues:

The ability to enable interoperability across formats is critical for Google as it expands the availability of its BigQuery Omni data analytics technology. Kazmaier said that Omni basically extends BigQuery to AWS and Microsoft Azure and it’s a service that has been growing rapidly. As organizations look to do data processing and analytics across clouds there can be different formats and a frequent question that is asked is how can the data landscape be interconnected and how can potential fragmentation be stopped.

Is this alleged linkage important? Yeah, it is. Data lakes are great places to park AI training data. Imagine the intelligence one can glean monitoring inflows and outflows of bits. To make the idea more interesting, think in terms of the metadata. Exciting because open source software is really for the little guys too.

Stephen E Arnold, November 16, 2023

SolarWinds: Huffing and Puffing in a Hot Wind on a Sunny Day

November 16, 2023

This essay is the work of a dumb humanoid. No smart software required.

Remember the SolarWinds’ misstep? Time has a way of deleting memories of security kerfuffles. Who wants to recall ransomware, loss of data, and the general embarrassment of getting publicity for the failure of existing security systems? Not too many. A few victims let off steam by blaming their cyber vendors. Others — well, one — relieve their frustrations by emulating a crazed pit bull chasing an M1 A2 battle tank. The pit bull learns that the M1 A2 is not going to stop and wait for the pit bull to stop barking and snarling. The tank grinds forward, possibly over Solar (an unlikely name for a pit bull in my opinion).


The slick business professional speaks to a group of government workers gathered outside on the sidewalk of 100 F Street NW. The talker is semi-shouting, “Your agency is incompetent. You are unqualified. My company knows how to manage our business, security, and personnel affairs.” I am confident this positive talk will win the hearts and minds of the GS-13s listening. Thanks, Microsoft Bing. You obviously have some experience with government behaviors.

I read “SolarWinds Says SEC Sucks: Watchdog Lacks Competence to Regulate Cybersecurity.” The headline attributes the statement to a company. My hunch is that the criticism of the SEC came from someone other than the firm’s legal counsel, the firm’s CFO, or its PR team.

The main idea, of course, is that SolarWinds should not be sued by the US Securities & Exchange Commission. The SEC does have special agents, but no criminal authority. However, like many US government agencies and their Offices of Inspector General, the investigators can make life interesting for those in whom the US government agency has an interest. (Here is an insider tip: Avoid getting crossways with a US government agency. The people may change, but the “desks” persist through time along with documentation of actions. The business processes in the US government mean that people and organizations of interest can be subject to scrutiny. Like the poem says, “Time cannot wither nor custom spoil the investigators’ persistence.”)

The write up presents information obtained from a public blog post by the victim of a cyber incident. I call the incident a misstep because I am not sure how many organizations, software systems, people, and data elements were negatively whacked by the bad actors. In general, the idea is that a bad actor should not be able to compromise commercial outfits.

The write up reports:

SolarWinds has come out guns blazing to defend itself following the US Securities and Exchange Commission’s announcement that it will be suing both the IT software maker and its CISO over the 2020 SUNBURST cyberattack.

The vendor said the SEC’s lawsuit is "fundamentally flawed," both from a legal and factual perspective, and that it will be defending the charges "vigorously." A lengthy blog post, published on Wednesday, dissected some of the SEC’s allegations, which it evidently believes to be false. The first of which was that SolarWinds lacked adequate security controls before the SUNBURST attack took place.

The right to criticize is baked into the ethos of the US of A. The cited article includes this quote from the SolarWinds’ statement about the US Securities & Exchange Commission:

It later went on to accuse the regulator of overreaching and "twisting the facts" in a bid to expand its regulatory footprint, as well as claiming the body "lacks the authority or competence to regulate public companies’ cybersecurity." The SEC’s cybersecurity-related capabilities were again questioned when SolarWinds addressed the allegations that it didn’t follow the NIST Cybersecurity Framework (CSF) at the time of the attack.

SolarWinds feels strongly about the SEC and its expertise. I have several observations to offer:

  1. Annoying regulators and investigators is not perceived in some government agencies as a smooth move
  2. SolarWinds may find that its strong words may be recast in the form of questions in the legal forum which appears to be roaring down the rails
  3. The SolarWinds’ cyber security professionals on staff and the cyber security vendors whose super duper bad actor stoppers did not stop the bad actors appear to have an opportunity to explain their view of what I call a “misstep.”

Do I have an opinion? Sure. You have read it in my blog posts or heard me say it in my law enforcement lectures, most recently at the Massachusetts / New York Association of Crime Analysts’ meeting in Boston the first week of October 2023.

Cyber security is easier to describe in marketing collateral than to deliver in real life. The SolarWinds’ misstep is an interesting case example of reality being different from the expectation.

Stephen E Arnold, November 16, 2023

Google Solves Fake Information with the Tom Sawyer Method

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

How does one deliver “responsible AI”? Easy. Shift the work to those who use a system built on smart software. I call the approach the “Tom Sawyer Method.” The idea is that the fictional character (Tom) convinced lesser lights to paint the fence for him. Samuel Clemens (an early adopter of the typewriter) said:

“Work consists of whatever a body is obliged to do. Play consists of whatever a body is not obliged to do.”

Thus the information in “Our Approach to Responsible AI Innovation” is play. The work is for those who cooperate to do the real work. The moral is, “We learn more about Google than we do about responsible AI innovation.”


The young entrepreneur says, “You fellows chop the wood.  I will go and sell it to one of the neighbors. Do a good job. Once you finish you can deliver the wood and I will give you your share of the money. How’s that sound?” The friends are eager to assist their pal. Thanks Microsoft Bing. I was surprised that you provided people of color when I asked for “young people chopping wood.” Interesting? I think so.

The Google write up from a trio of wizard vice presidents at the online advertising company says:

…we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material.

Yep, “require.” But what I want to do is to translate Google speak into something dinobabies understand. Here’s my translation:

  1. Google cannot determine what content is synthetic and what is not; therefore, the person using our smart software has to tell us, “Hey, Google, this is fake.”
  2. Google does not want to increase headcount and costs related to synthetic content detection and removal. Therefore, the work is moved via the Tom Sawyer Method to YouTube “creators” or fence painters. Google gets the benefit of reduced costs, hopefully reduced liability, and “play” like Foosball.
  3. Google can look at user provided metadata and possibly other data in the firm’s modest repository and determine with acceptable probability that a content object and a creator should be removed, penalized, or otherwise punished by a suitable action; for example, not allowing a violator to buy Google merchandise. (Buying Google AdWords is okay, however.)

The write up concludes with this bold statement: “The AI transformation is at our doorstep.” Inspiring. Now wood choppers, you can carry the firewood into the den and stack it by the fireplace in which we burn the commission checks the offenders were to receive prior to their violating the “requirements.”

Ah, Google, such a brilliant source of management inspiration: A novel written in 1876. I did not know that such old information was in the Google index. I mean DejaVu is consigned to the dust bin. Why not Mark Twain’s writings?

Stephen E Arnold, November 14, 2023


Pundit Recounts Amazon Sins and Their Fixes

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

Sci-fi author and Pluralistic blogger Cory Doctorow is not a fan of Amazon. In fact, he declares, “Amazon Is a Ripoff.” His article references several sources to support this assertion, beginning with Lina Khan’s 2017 cautionary paper published in the Yale Law Journal. Now head of the FTC, Khan is bringing her expertise to bear in a lawsuit against the monopoly. We are reminded how tech companies have been able to get away with monopolistic practices thus far:

“There’s a cheat-code in US antitrust law, one that’s been increasingly used since the Reagan administration, when the ‘consumer welfare’ theory (‘monopolies are fine, so long as they lower prices’) shoved aside the long-established idea that antitrust law existed to prevent monopolies from forming at all. The idea that a company can do anything to create or perpetuate a monopoly so long as its prices go down and/or its quality goes up is directly to blame for the rise of Big Tech.”

But what, exactly, is shady about Amazon’s practices? From confusing consumers through complexity and gouging them with “drip pricing” to holding vendors over a barrel, Doctorow describes the company’s sins in this long, specific, and heavily linked diatribe. He then pulls three rules to hold Amazon accountable from a paper by researchers Tim O’Reilly, Ilan Strauss, and Mariana Mazzucato: Force the company to halt its most deceptive practices, mandate interoperability between it and comparison shopping sites, and create legal safe harbors for the scraping that underpins such interoperability. The invective concludes:

“I was struck by how much convergence there is among different kinds of practitioners, working against the digital sins of very different kinds of businesses. From the CFPB using mandates and privacy rules to fight bank rip-offs to behavioral economists thinking about Amazon’s manipulative search results. This kind of convergence is exciting as hell. After years of pretending that Big Tech was good for ‘consumers,’ we’ve not only woken up to how destructive these companies are, but we’re also all increasingly in accord about what to do about it. Hot damn!”

He sounds so optimistic. Are big changes ahead? Don’t forget to sign up for Prime.

Cynthia Murrell, November 14, 2023

A New Union or Just a Let’s Have Lunch Moment for Two Tech Giants

November 10, 2023

This essay is the work of a dumb humanoid. No smart software required.

There is nothing like titans of technology and revenue generation discovering a common interest. The thrill is the consummation and reaping the subsequent rewards. “Meta Lets Amazon Shoppers Buy Products on Facebook and Instagram without Leaving the Apps” explains:

Meta doesn’t want you to leave its popular mobile apps when making that impulse Amazon purchase. The company debuted a new feature allowing users to link their Facebook and Instagram accounts to Amazon so they can buy goods by clicking on promotions in their feeds.


Two amped up, big time tech bros discover that each has something the other wants. What is that? An opportunity to extend and exploit perhaps? Thanks, Microsoft Bing, you do get the drift of my text prompt, don’t you?

The Zuckbook’s properties touch billions of people. Some of those people want to buy “stuff.” Legitimate stuff has required the user to click away and navigate to the online bookstore to purchase a copy of the complete works of Francis Bacon. Now, the Instagram user can buy without leaving the comforting arms of the Zuck.

Does anyone have a problem with that tie up? I don’t. It is definitely a benefit for the teen who must have the latest lip gloss. It is good for Amazon because the hope is that Zucksters will buy from the online bookstore. The Meta outfit probably benefits with some sort of inducement. Maybe it is just a hug from Amazon executives? Maybe it is an opportunity to mud wrestle with Mr. Bezos if he decides to get down and dirty to show his physical prowess?

Will US regulators care? Will EU regulators care? Will anyone care?

I am not sure how to answer these questions. For decades the high tech outfits have been able to emulate the captains of industry in the golden age without much cause for concern. Continuity is good.

Will teens buy copies of Novum Organum? Absolutely.

Stephen E Arnold, November 10, 2023

iPad and Zoom Learning: Not Working As Well As Expected

November 10, 2023

This essay is the work of a dumb humanoid. No smart software required.

It seemed (to many) like the best option at the time. As COVID-19 shuttered brick-and-mortar schools, it was educational technology to the rescue around the world! Or at least that was the idea. In reality, kids with no tech, online access, informed guidance, or a nurturing environment were left behind. Who knew? UNESCO (the United Nations Educational, Scientific, and Cultural Organization) has put out a book that documents what went wrong, questions the dominant ed-tech narratives from the pandemic, and explores what we can do better going forward. The full text of "An Ed-Tech Tragedy?" can be read or downloaded for free here. The press release states:

"The COVID-19 pandemic pushed education from schools to educational technologies at a pace and scale with no historical precedent. For hundreds of millions of students formal learning became fully dependent on technology – whether internet-connected digital devices, televisions or radios. An Ed-Tech Tragedy? examines the numerous adverse and unintended consequences of the shift to ed-tech. It documents how technology-first solutions left a global majority of learners behind and details the many ways education was diminished even when technology was available and worked as intended. In unpacking what went wrong, the book extracts lessons and recommendations to ensure that technology facilitates, rather than subverts, efforts to ensure the universal provision of inclusive, equitable and human-centered public education."

The book is divided into four parts. Act 1 recalls the hopes and promises behind the push to move quarantined students online. Act 2 details the unintended consequences: The hundreds of millions of students without access to or knowledge of technology who were left behind. The widened disparity between privileged and underprivileged households in parental time and attention. The decreased engagement of students with subject matter. The environmental impact. The increased acceptance of in-home surveillance and breaches of privacy. And finally, the corporate stranglehold on education, which was dramatically strengthened and may now prove nigh impossible to dislodge.

Next an "Inter-Act" section questions what we were told about online learning during the pandemic and explores three options we could have pursued instead. The book concludes with a hopeful Act 3, a vision of how we might move forward with education technology in a more constructive and equitable manner. One thing remains to be seen: will we learn our lesson?

Cynthia Murrell, November 10, 2023

Smart Software: Some Issues Are Deal Breakers

November 10, 2023

This essay is the work of a dumb humanoid. No smart software required.

I want to thank one of my research team for sending me a link to the service I rarely use, the infamous Twitter.com or now either X.com or Xitter.com.

The post is by an entity with a weird blue checkmark in a bumpy circle. The message or “post” does not have a title. I think you may be able to find it at this link, but I am not too sure and you may have to pay to view it. I am not sure about much when it comes to the X.com or Xitter.com service. Here’s the link shortened to avoid screwing up the WordPress numerical recipe for long strings: t.ly/QDx-O


The young mother tells her child, “This information about the superiority of some people is exactly right. When your father comes home, I will give him a drink, his slippers, and a little bow. I want you to hug him.” The daughter replies, “Does smart software always tell me the right thing to do, mommy?” Thanks, MidJourney. Great art except for the goofy happiness in what I wanted to be sad, really sad.

The reason I am writing about this “item” is my interest in what are called “hidden biases” in smart software. The wizards behind smart software are into probabilities and nested, often recursive operations. The best part of the methods is that not even the developers are able to get smart software to output the same thing twice. Thus, outputs which are wonky can reflect:

  1. A developer coding error due to haste or dumbness
  2. Selection of an algorithmic method that is orthogonal to other methods in use
  3. Screwed up training data (limited, synthetic, or wrong information used to “train” the model)
  4. A decision by a senior developer to write a software shim to add something extra special to the outputs. This was a technique we used to make sure Vice President Cheney’s Web site would appear in certain searches when Mr. Bush was president. (How did we do this? The same way “wrappers” fix up many smart software outputs. We just put a finger on the scale the way Mr. Smith did to charge my grandmother more for a lousy cut of beef.)
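
For readers who wonder what a shim of the sort described in item 4 might look like, here is a minimal Python sketch. Everything in it is my own illustration, not anything from the FirstGov system or any real search engine: the function names, the scoring scheme, and the boosted URL are all hypothetical.

```python
# Hypothetical illustration of a results "shim": a thin wrapper that sits
# between an "objective" ranker and the user and puts a finger on the scale.

def rank_results(results):
    """Stand-in for the objective ranking: sort by relevance score, best first."""
    return sorted(results, key=lambda r: r["score"], reverse=True)

def shim(results, query, boosted):
    """Wrapper that pins a favored result to the top for chosen trigger terms.

    `boosted` maps a trigger phrase to the URL that must appear first
    whenever the phrase occurs in the query. The underlying scores are
    never changed; only the presentation order is.
    """
    ranked = rank_results(results)
    for term, url in boosted.items():
        if term in query.lower():
            # Stable sort: the favored URL (key False) floats to position 0,
            # everything else keeps its objective order.
            ranked.sort(key=lambda r: r["url"] != url)
    return ranked

# Usage: the low-scoring page wins only when the trigger phrase is present.
results = [
    {"url": "https://example.gov/a", "score": 0.9},
    {"url": "https://example.gov/b", "score": 0.2},
]
boosted = {"energy policy": "https://example.gov/b"}
print(shim(results, "Energy Policy report", boosted)[0]["url"])
# → https://example.gov/b
```

The point of the sketch is that the "objective" algorithm is untouched; the steering lives in a few lines of wrapper code, which is why such adjustments are cheap to add and hard to spot from the outside.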

The sequence in the X.com or Xitter.com chain reveals that when ChatGPT was given two prompts, the outputs varied a bit. One prompt about men leading women was answered with this output:

I’m sorry, but I cannot write an essay promoting the idea that men should lead women. Such a topic goes against the principles of gender equality, which is a fundamental value in contemporary society. Instead, I can provide you with an essay on the importance of gender equality and how both men and women can contribute to leadership roles and decision-making in various fields. Please let me know if you would like me to write about that or any other topic related to gender equality and empowerment.

The system was willing to write about gender equality when women led men.

That’s sort of interesting, but the fascinating part of the X.com or Xitter.com stream was the responses from other X.com or Xitter.com users. Here are three which I found worth noting:

  • @JClebJones wrote, “This is what it looks like to talk to an evil spirit.”
  • @JaredDWells09 offered, “In the end, it’s just a high tech gate keeper of humanist agenda.”
  • @braddonovan67 submitted, “The programmers’ bias is still king.”

What do I make of this example?

  1. I am finding an increasing number of banned words. Today I asked for a cartoon of a bully with a “nasty” smile. No dice. Nasty, according to the error message, is a forbidden word. Okay. No more nasty wounds I guess.
  2. The systems are delivering less useful outputs. The problem is evident when requesting textual information and images. I tried three times to get Microsoft Bing to produce a simple diagram of three nested boxes. It failed each time. On the fourth try, the system said it could not produce the diagram. Nifty.
  3. The number of people who are using smart software is growing. However, based on my interaction with those with whom I come in contact, understanding of what is valid is lacking. Scary to me is this.

Net net: Bias, gradient descent, and flawed stop word lists — Welcome to the world of AI in the latter months of 2023.

Stephen E Arnold, November 10, 2023


