Responding to the PR Buzz about ChatGPT: A Tale of Two Techies

January 24, 2023

One has to be impressed with the PR hype about ChatGPT. One can find tip sheets for queries (yes, queries matter… a lot), ideas for new start-ups, and Sillycon Valley pundits yammering on podcasts. At an Information Industry Association meeting in Boston, Massachusetts, a person who, I think, was called Marvin or Martin Wein-something made an impassioned statement about the importance of artificial intelligence. I recall his saying, “It is happening. Now.”

Marvin or Martin made that statement, which still sticks in my mind, in 1982 or so. That works out to about 40 years ago.

What strikes me this morning is the difference between the response of Facebook and Google. This is a Tale of Two Techies.

In the case of Google, it is Red Alert time. The fear is palpable among the senior managers. How do I know? I read “Google Founders Return As ChatGPT Threatens Search Business.” I could trot out some parallels between Google management’s fear and the royals threatened by riff raff. Make no mistake. The Googlers have quantum supremacy and the DeepMind protein and game playing machines. I recall reading or being told that Google has more than 20 applications that will be available… soon. (Wasn’t that type of announcement once called vaporware?) The key point is that the Googlers are frightened and, like Disney, have had to call on the team of Brin and Page to revivify thinking about the threat to the search business. I want to remind you that Google’s search advertising business was inspired by the Overture pay-per-click approach, which Yahoo acquired. Google settled the related litigation, and the rest is history. Google became an alleged monopoly, and Yahoo devolved into a spammy email service.

And what about Facebook? I noted this article: “ChatGPT Is Not Particularly Innovative and Nothing Revolutionary, Says Meta’s Chief AI Scientist.” The write up explains that Meta’s stance with regard to the vibe machine ChatGPT is “meh.” I think Meta or the Zuckbook does care, but Meta has a number of other issues to which the proud firm must respond. Smart software that seems to be a Swiss Army knife of solutions is “nothing revolutionary.” Okay.

Let’s imagine we are in college in one of those miserable required courses in literature. Our assignment is to analyze the confection called the Tale of Two Techies. What’s the metaphorical pivot for this soap opera?

Here’s my take:

  • Meta is either too embarrassed, too confused, or too overwhelmed with ongoing legal hassles to worry too much about ChatGPT. Putting on the “meh” is good enough. The company seems to be saying, “We don’t care too much… at least in public.”
  • Google is running around with its hair on fire. The senior management team’s calling on the dynamic duo to save the day is indicative of the mental short circuits the company exhibits.

Net net: Good, bad, or indifferent, ChatGPT makes clear the lack of what one might call big-time management thinking. Is this new? Sadly, no.

Stephen E Arnold, January 24, 2023

OpenAI Working on Proprietary Watermark for Its AI-Generated Text

January 24, 2023

Even before OpenAI made its text generator GPT-3 available to the public, folks were concerned the tool was too good at mimicking the human-written word. For example, what is to keep students from handing their assignments off to an algorithm? (Nothing, as it turns out.) How would one know? Now OpenAI has come up with a solution—of sorts. Analytics India Magazine reports, “Generated by Human or AI: OpenAI to Watermark its Content.” Writer Pritam Bordoloi describes how the watermark would work:

“‘We want it to be much harder to take a GPT output and pass it off as if it came from a human,’ [OpenAI’s Scott Aaronson] revealed while presenting a lecture at the University of Texas at Austin. ‘For GPT, every input and output is a string of tokens, which could be words but also punctuation marks, parts of words, or more—there are about 100,000 tokens in total. At its core, GPT is constantly generating a probability distribution over the next token to generate, conditional on the string of previous tokens,’ he said in a blog post documenting his lecture. So, whenever an AI is generating text, the tool that Aaronson is working on would embed an ‘unnoticeable secret signal’ which would indicate the origin of the text. ‘We actually have a working prototype of the watermarking scheme, built by OpenAI engineer Hendrik Kirchner.’ While you and I might still be scratching our heads about whether the content is written by an AI or a human, OpenAI—who will have access to a cryptographic key—would be able to uncover a watermark, Aaronson revealed.”
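The general mechanism Aaronson describes is this: the model’s next-token sampling is steered by a pseudorandom function keyed on the preceding tokens, so the output still follows the model’s probability distribution on average, yet anyone holding the secret key can test a text for the telltale statistical skew. Here is a minimal, hypothetical Python sketch of that idea; the HMAC-based scoring, the exponential-race sampling trick, and all names and toy data are stand-ins, not OpenAI’s implementation:

```python
import hashlib
import hmac
import math

SECRET_KEY = b"hypothetical-demo-key"  # stand-in for the cryptographic key only OpenAI would hold

def keyed_score(context, candidate, key=SECRET_KEY):
    """Pseudorandom value in [0, 1) for a candidate next token,
    derived from the secret key and the preceding tokens."""
    msg = "\x1f".join(list(context) + [candidate]).encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2.0 ** 64

def pick_next_token(context, distribution):
    """Exponential-race (Gumbel-style) sampling: choosing the token that
    maximizes log(r) / p still follows the model's distribution overall,
    but it skews the keyed scores of chosen tokens upward.
    That invisible skew is the watermark."""
    return max(
        distribution,
        key=lambda tok: math.log(max(keyed_score(context, tok), 1e-300))
        / distribution[tok],
    )

def detect(tokens, key=SECRET_KEY):
    """Average keyed score over a text: unmarked text hovers near 0.5,
    while watermarked text drifts noticeably higher."""
    scores = [keyed_score(tokens[:i], tokens[i], key) for i in range(1, len(tokens))]
    return sum(scores) / len(scores)

# Toy usage with an invented three-token vocabulary.
context = ["The", "cat"]
for _ in range(20):
    context.append(pick_next_token(context, {"sat": 0.6, "ran": 0.3, "slept": 0.1}))
print("detector score:", round(detect(context), 3))  # well above 0.5 for marked text
```

The point of the construction is that the skew is invisible without the key: to a reader, or to a statistical test that lacks the key, the output looks like ordinary sampling.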

Great! OpenAI will be able to tell the difference. But … how does that help the rest of us? If the company just gifted the watermarking key to the public, bad actors would find a way around it. Besides, as Bordoloi notes, that would also nix OpenAI’s chance to make a profit off it. Maybe it will sell it as a service to certain qualified users? That would be an impressive example of creating a problem and selling the solution—a classic business model. Was this part of the firm’s plan all along? Plus, the killer question, “Will it work?”

Cynthia Murrell, January 24, 2023

Why Governments and Others Outsource… Almost Everything

January 24, 2023

I read a very good essay called “Questions for a New Technology.” The core of the write up is a list of eight questions. Most of these are problems for full-time employees. Let me give you one example:

Are we clear on what new costs we are taking on with the new technology? (monitoring, training, cognitive load, etc)

The challenge, it strikes me, is the phrase “new technology.” By definition, most people in an organization will not know the details of a new technology. If a couple of people do, those individuals have to get the others up to speed. The other problem is that it is quite difficult for humans to look at a “new technology” and anticipate its knock-on or downstream effects. A good example is the craziness of Facebook’s dating objective and how the system evolved into a mechanism for social revolution. What in-house group of workers can tackle problems like that once the method leaves the dorm room?

The other questions probe similarly difficult tasks.

But my point is that most governments do not rely on their full-time employees to solve problems. Years ago I gave a lecture at CeBIT about search. One person in the audience pointed out that in that individual’s EU agency, third parties were hired to analyze problems and help implement a solution. The same behavior popped up in Sweden, the US, Canada, and several other countries in which I worked prior to my retirement in 2013.

Three points:

  1. Full-time employees recognize the impossibility of tackling fundamental questions and do not really try
  2. The consultants retained to answer the questions, or help answer them, are not equipped to answer the questions either; they bill the client anyway
  3. Fundamental questions are dodged by management methods like “let’s push decisions down” or “we decide in an organic manner.”

Doing homework and making informed decisions is hard. Learning, evaluating risks, and implementing in a thoughtful manner are uncomfortable for many people. The result is the dysfunction evident in airlines, government agencies, hospitals, education, and many other disciplines. Scientific research is often not reproducible. Is that a good thing? Yes, if one lacks expertise and does not want to accept responsibility.

Stephen E Arnold, January 24, 2023

Quote to Note: AI and the Need to Do

January 23, 2023

The source is a wizard from Stanford University. The quote to note appears in “Stanford Faculty Weigh In on ChatGPT’s Shake-Up in Education.”

“We need the use of this technology to be ethical, equitable, and accountable.”

Several questions come to mind:

  1. Is the Stanford Artificial Intelligence Lab into “ethical, equitable, and accountable”?
  2. Is the Stanford business school into “ethical, equitable, and accountable”?
  3. Are the Stanford computer science units into “ethical, equitable, and accountable”?

Nice sentiment for a sentiment analysis program. Disconnected from reality? From my perspective, absolutely.

Stephen E Arnold, January 23, 2023

Google Search: A Hellscape? Nah, the Greasy Mess Is Much Bigger

January 23, 2023

I read “Google vs. ChatGPT Told by Aglio e Olio.”

The write up makes it clear that the author is definitely not Googley. Let’s look at a handful of statements and then consider them in the context of the greasy stuff and, of course, the crackling hellscape analogy. Imagine a hellscape on Shoreline Drive. I never noticed pools of flame, Beelzebub hanging in the parking lot, or people in chains being walked by one of Satan’s pals. Maybe I was not sufficiently alert?

How about this statement:

A single American company operating as a bottleneck behind the world’s information is a dangerous, and inefficient proposition and a big theme of the Margins is that monopolies are bad so it’s also on brand.

After 25 years, the light bulb clicked on, and the modern Archimedes has discovered the secret of Googzilla. I recall the thrill of the Yahoo settlement and the green light for the initial public offering. I recall the sad statements of Foundem, which “found” itself going nowhere fast in search results. I recall a meeting in Paris in which attendees complained about the difficulty of finding French tax form links in Google.fr search results. I remember the owner of a major Web site shouting at lunch about his traffic dropping from two million visits per month to 200,000. Ah, memories. But the reason these anecdotes come to mind is a willing group of people who found free and convenient more valuable than old-fashioned research. You remember. Lycos, libraries, conversations, and those impedimenta to actual knowledge work.

Also, how about this statement?

I am assuming the costs and the risk I’ve mentioned above has been what’s been making Google keep its cards closer to its chest.

Ah, ha. Google is risk averse. As organizations become older and larger, what does one expect? I think of Google like Tom Brady or Cristiano Ronaldo. Google is not able to accept the fact that it is older, has a bum knee, and has lost some of its fangs. Remember the skeleton of the dinosaur in front of one of Google’s buildings? It was, as I recall, a Tyrannosaurus rex, but it was missing a fang or two. Then the weather changed, and the actual dino died. Google is not keeping cards closer to its chest; Google does not know what to do. Regulators are no longer afraid to fine the big reptile again and again. Googlers become Xooglers and suggest that the company is losing the zip in its step. Some choose to compete and create for-fee search systems. Good luck with that! Looking at the skeleton, those cards could slip through the bones and scatter on the concrete.

And what about this statement?

the real reason Google is at risk that thanks to their monopoly position, the folks over at Mountain View have left their once-incredible search experience degenerate into a spam-ridden, SEO-fueled hellscape.

Catchy. Search engine optimization, based on my observations of the Google’s antics, was a sure-fire way to get marketers dancing the Google hand jive. Then, when SEO failed (as it usually did), those SEO experts became sales professionals for Google advertising and instructors in how to create Web sites and content shaped to follow the Google jazz band.

Net net: The Google is big, and it is not going anywhere quickly. But the past of Google, forgotten by many, includes a Google Glass attempted suicide, making babies in the legal department, and a heroin overdose on a yacht. Ah, bad search. What about a deeper look? Nah, just focus on ChatGPT, the first of many who will now probe the soft underbelly of Googzilla. Oh, sorry, Googzilla is a skeleton. The real beast is gone.

Stephen E Arnold, January 23, 2023

How to Make Chinese Artificial Intelligence Professionals Hop Like Happy Bunnies

January 23, 2023

Happy New Year! It is the Year of the Rabbit, and the write up “Is Copyright Eating AI?” may make some celebrants happier than the contents of a red envelope. The article explains that the US legal system may derail some of the more interesting, publicly accessible applications of smart software. Why? US legal eagles and the thicket of guard rails that comprise copyright.

The article states:

… neural network developers, get ready for the lawyers, because they are coming to get you.

That means the interesting applications on the “look what’s new on the Internet” news service Product Hunt will disappear. Only big outfits can afford to bring and fight litigation. When I worked as an expert witness, I learned that money is not an issue of concern for some of the parties to a lawsuit. Those working as robot repair technicians for a fast food chain will want to avoid engaging in a legal dispute.

The write up also says:

If the AI industry is to survive, we need a clear legal rule that neural networks, and the outputs they produce, are not presumed to be copies of the data used to train them. Otherwise, the entire industry will be plagued with lawsuits that will stifle innovation and only enrich plaintiff’s lawyers.

I liked the word “survive.” Yep, continue to exist. That’s an interesting idea. Let’s assume that the US legal process brings AI development to a halt. Who benefits? I am a dinobaby living in rural Kentucky. Nevertheless, it seems to me that a country will just keep on working with smart software informed by content. Some of that content may be a US citizen’s intellectual property, possibly a hard drive with data from Los Alamos National Laboratory, or a document produced by a scientific and technical publisher.

It seems to me that smart software companies and research groups in a country with zero interest in US laws can:

  1. Continue to acquire content by purchase, crawling, or enlisting the assistance of third parties
  2. Use these data to update and refine their models
  3. Develop innovations not available to smart software developers in the US.

Interesting, and with the present efficiency of some legal and regulatory systems, my hunch is that bunnies in China are looking forward to 2023. Will an innovator use enhanced AI for information warfare or other weapons? Sure.

Stephen E Arnold, January 23, 2023

ChatGPT Spells Trouble for the Google

January 20, 2023

The estimable New York Times published “Google Calls In Help From Larry Page and Sergey Brin for A.I. Fight.” [Note: You will find this write up behind a paywall, of course.] You may know that Google is on “Red Alert” because people are (correctly or incorrectly) talking about ChatGPT as the next big thing. Nothing is more terrifying than once being the next big thing and learning that there is another next big thing. The former next big thing is caught in a phase change; therefore, discomfort replaces comfort.

The Gray Lady states:

The re-engagement of Google’s founders, at the invitation of the company’s current chief executive, Sundar Pichai, emphasized the urgency felt among many Google executives about artificial intelligence and that chatbot, ChatGPT.

Ah, ha. Re-engagement. Messrs. Brin and Page have kept a low profile. Mr. Brin manages his social life, and Mr. Page enjoys his private island. Now bang! Those clever Backrub innovators are needed by the Google senior managers.

The Google is in action mode. I have mentioned papers by Google which explain the really super duper smart software that will be racing down the Information Superhighway. Soon. Any day now. The NYT story states:

Google now intends to unveil more than 20 new products and demonstrate a version of its search engine with chatbot features this year

And the weapon of choice? A PowerPoint type of presentation. Okay. A slide deck. The promise of great things from innovators. The hitch in the git-along is that the sometimes right, sometimes wrong ChatGPT implementations are everywhere. I pointed a major Web site operator at the You.com writing function. Those folks were excited and sent me a machine-generated story, saying, “With a little editing, this is really good.” Was it good? What do I know about electric Corvettes? But these people had a Eureka moment. Look up Corvette on Google and what do you get? Ads. Use You.com and what do you get? Something that excited a big Web site owner.

The NYT included a quote from an expert, of course. Here’s a snippet I circled:

“This is a moment of significant vulnerability for Google,” said D. Sivakumar, a former Google research director who helped found a start-up called Tonita, which makes search technology for e-commerce companies. “ChatGPT has put a stake in the ground, saying, ‘Here’s what a compelling new search experience could look like.’” Mr. Sivakumar added that Google had overcome previous challenges and could deploy its arsenal of A.I. to stay competitive.

Notice the phrase “could deploy its arsenal of A.I. to stay competitive.” To me, this Xoogler is saying, “Oh, oh. Competition. But Google could take action.” “Could” is the guts of a PowerPoint presentation. Okay, but ChatGPT is doing. No “could” needed.

But the NYT article includes information that Google wants to be responsible. That’s a great idea. So was solving death or flying Loon balloons over Sri Lanka. The problem is that ChatGPT has caught the eye of Google’s arch enemy Microsoft and a platoon of business types at Davos. ChatGPT caused the NYT to write an article about two rich innovators returning like Disney executives to help out a sickly looking Mickey Mouse.

I am looking forward to watching Googzilla break free of the slime mold around its paws and respond. Time to abandon the Foosball and go, go, go.

Stephen E Arnold, January 20, 2023

The LaundroGraph: Bad Actors Be On Your Toes

January 20, 2023

Now here is a valuable use of machine learning technology. India’s DailyHunt reveals, “This Deep Learning Technology Is a Money-Launderer’s Worst Nightmare.” The software, designed to help disrupt criminal money laundering operations, is the product of financial data-science firm Feedzai of Portugal. We learn:

“The Feedzai team developed LaundroGraph, a self-supervised model that might reduce the time-consuming process of assessing vast volumes of financial interactions for suspicious transactions or monetary exchanges, in a paper presented at the 3rd ACM International Conference on AI in Finance. Their approach is based on a graph neural network, which is an artificial neural network or ANN built to process vast volumes of data in the form of a graph.”

The AML (anti-money laundering) software simplifies the job of human analysts, who otherwise must manually peruse entire transaction histories in search of unusual activity. The article quotes researcher Mario Cardoso:

“Cardoso explained, ‘LaundroGraph generates dense, context-aware representations of behavior that are decoupled from any specific labels.’ ‘It accomplishes this by utilizing both structural and features information from a graph via a link prediction task between customers and transactions. We define our graph as a customer-transaction bipartite graph generated from raw financial movement data.’ Feedzai researchers put their algorithm through a series of tests to see how well it predicted suspicious transfers in a dataset of real-world transactions. They discovered that it had much greater predictive power than other baseline measures developed to aid anti-money laundering operations. ‘Because it does not require labels, LaundroGraph is appropriate for a wide range of real-world financial applications that might benefit from graph-structured data,’ Cardoso explained.”

For those who are unfamiliar but curious (like me), navigate to this explanation of bipartite graphs. The future applications Cardoso envisions include detecting other financial crimes like fraud. Since the researchers intend to continue developing their tools, financial crimes may soon become much trickier to pull off.
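Feedzai’s production model is a graph neural network, but the self-supervised task it trains on, link prediction between customer and transaction nodes in a bipartite graph, can be illustrated with a far simpler embedding model. The sketch below is a toy, with invented data and my own names throughout, not Feedzai’s code: it learns vector embeddings so that observed customer-transaction links score high and random non-links score low, the raw material an analyst-facing anomaly flag could draw on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bipartite graph: an edge (c, t) means customer c took part in
# transaction t. All data here is invented for illustration.
n_customers, n_transactions, dim = 50, 200, 16
edges = [(int(rng.integers(n_customers)), int(rng.integers(n_transactions)))
         for _ in range(400)]

# Learnable embeddings for the two node types of the bipartite graph.
C = rng.normal(scale=0.1, size=(n_customers, dim))
T = rng.normal(scale=0.1, size=(n_transactions, dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Self-supervised link prediction with negative sampling: no fraud labels,
# just "this link exists" versus "this random link does not".
lr = 0.05
for epoch in range(100):
    for c, t in edges:
        p = sigmoid(C[c] @ T[t])   # predicted link probability
        g = p - 1.0                # log-loss gradient for an observed pair
        C[c], T[t] = C[c] - lr * g * T[t], T[t] - lr * g * C[c]
        t_neg = int(rng.integers(n_transactions))  # random non-link
        p = sigmoid(C[c] @ T[t_neg])
        C[c], T[t_neg] = C[c] - lr * p * T[t_neg], T[t_neg] - lr * p * C[c]

# After training, a low score for an actual pair (or a high score for an
# implausible one) flags behavior worth a human analyst's review.
c, t = edges[0]
print("score for an observed pair:", float(sigmoid(C[c] @ T[t])))
```

A real AML pipeline would swap the dot-product scorer for a graph neural network that aggregates each node’s neighborhood, but the training signal, observed links versus sampled non-links, is the same idea Cardoso describes.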

Cynthia Murrell, January 20, 2023

Is SkyNet a Reality or a Plot Device?

January 20, 2023

We humans must resist the temptation to outsource our reasoning to an AI, no matter how trustworthy it sounds. This is because, as iai News points out, “All-Knowing Machines Are a Fantasy.” Society is now in danger of confusing fiction with reality, a mistake that could have serious consequences. Professors Emily M. Bender and Chirag Shah observe:

“Decades of science fiction have taught us that a key feature of a high-tech future is computer systems that give us instant access to seemingly limitless collections of knowledge through an interface that takes the form of a friendly (or sometimes sinisterly detached) voice. The early promise of the World Wide Web was that it might be the start of that collection of knowledge. With Meta’s Galactica, OpenAI’s ChatGPT and earlier this year LaMDA from Google, it seems like the friendly language interface is just around the corner, too. However, we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer needs them to have—for a roadmap to how technology could and should be created in the real world. In fact, large language models like Galactica, ChatGPT and LaMDA are not fit for purpose as information access systems, in two fundamental and independent ways.”

The first problem is that language models do what they are built to do very well: they produce text that sounds human-generated. Authoritative, even. Listeners unconsciously ascribe human thought processes to the results. In truth, algorithms lack understanding, intent, and accountability, making them inherently unreliable as unvetted sources of information.

Next is the nature of information itself. It is impossible for an AI to tap into a comprehensive database of knowledge because such a thing does not exist and probably never will. The Web, with its contradictions, incomplete information, and downright falsehoods, certainly does not qualify. Though for some queries a quick, straightforward answer is appropriate (how many tablespoons in a cup?), most are not so simple. One must compare answers and evaluate provenance. In fact, the authors note, the very process of considering sources helps us refine our needs and context as well as assess the data itself. We miss out on all that when, in search of a quick answer, we accept the first response from any search system. That temptation is hard enough to resist with a good old-fashioned Google search. The human-like interaction with chatbots just makes it more seductive. The article notes:

“Over both evolutionary time and every individual’s lived experience, natural language to-and-fro has always been with fellow human beings. As we encounter synthetic language output, it is very difficult not to extend trust in the same way as we would with a human. We argue that systems need to be very carefully designed so as not to abuse this trust.”

That is a good point, though AI developers may not be eager to oblige. It remains up to us humans to resist temptation and take the time to think for ourselves.

Cynthia Murrell, January 20, 2023

Eczema? No, Terminator Skin

January 20, 2023

Once again, yesterday’s science fiction is today’s science fact. ScienceDaily reports, “Soft Robot Detects Damage, Heals Itself.” Led by Rob Shepherd, associate professor of mechanical and aerospace engineering, Cornell University’s Organic Robotics Lab has developed stretchable fiber-optic sensors. These sensors could be incorporated in soft robots, wearable tech, and other components. We learn:

“For self-healing to work, Shepard says the key first step is that the robot must be able to identify that there is, in fact, something that needs to be fixed. To do this, researchers have pioneered a technique using fiber-optic sensors coupled with LED lights capable of detecting minute changes on the surface of the robot. These sensors are combined with a polyurethane urea elastomer that incorporates hydrogen bonds, for rapid healing, and disulfide exchanges, for strength. The resulting SHeaLDS — self-healing light guides for dynamic sensing — provides a damage-resistant soft robot that can self-heal from cuts at room temperature without any external intervention. To demonstrate the technology, the researchers installed the SHeaLDS in a soft robot resembling a four-legged starfish and equipped it with feedback control. Researchers then punctured one of its legs six times, after which the robot was then able to detect the damage and self-heal each cut in about a minute. The robot could also autonomously adapt its gait based on the damage it sensed.”

Some of us must remind ourselves these robots cannot experience pain when we read such brutal-sounding descriptions. As if to make that even more difficult, we learn this material is similar to human flesh: it can easily heal from cuts but has more trouble repairing burn or acid damage. The write-up describes the researchers’ next steps:

“Shepherd plans to integrate SHeaLDS with machine learning algorithms capable of recognizing tactile events to eventually create ‘a very enduring robot that has a self-healing skin but uses the same skin to feel its environment to be able to do more tasks.'”

Yep, sci-fi made manifest. Stay tuned.

Cynthia Murrell, January 20, 2023
