Stop Smart Software! A Petition to Save the World! Signed by 350 Humans!

May 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

A “real” journalist (Kevin Roose), the one a chat bot told to divorce his significant other, published the calming, measured, non-clickbait story “AI Poses Risk of Extinction, Industry Leaders Warn.” What’s ahead for the forest fire of smart software activity? The headline explains a “risk of extinction.” What? No screenshot of a Terminator robot saying:

The strength of the human heart. The difference between us and machines. [Uplifting music]

Sadly, no.

The write up reports:

Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen. These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.

Isn’t the Gray Lady amplifying fear, uncertainty, and doubt? Didn’t IBM pay sales engineers to spread the FUD?

Enough. AI is bad. Stop those who refined the math and numerical recipes. Pass laws to regulate the AI technology. Act now. Save humanity. Several observations:

  1. Technologists who “develop” functions and then beg for rules are being disingenuous. The idea is to practice self-control and judgment before inviting Mr. Hyde to brunch.
  2. With smart software chock full of “unknown unknowns”, how exactly are elected officials supposed to regulate a diffusing and enabling technology? Appealing to US and EU officials omits common sense in my opinion.
  3. The “fix” for the AI craziness may be emulating the Chinese approach: Do what the CCP wants or be reeducated. What a nation state can do with smart software is indeed something to consider. But China has taken action and will move forward with militarization no matter what the US and EU do.

Silicon Valley type innovation has created a “myth of excellence.” One need only look at the consequences of social media to see the results of high school science club decision making. Now a handful of individuals with the Silicon Valley DNA want external forces to rein in their money making experiments and personal theme parks. Sorry, folks. Internal control, ethical behavior, and integrity are what mature individuals supply on their own.

A sheet of paper with “rules” and “regulations” is a bit late to the Silicon Valley game. And the Gray Lady? Chasing clicks in my opinion.

Stephen E Arnold, May 30, 2023

Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?

May 30, 2023

I wrote about Sam AI-man’s explanation that he wants regulation. I pointed out that his definition of regulate means leaving OpenAI free to do whatever it can to ace out the Google and a handful of other big outfits chasing the pot of gold at the end of the AI rainbow.

I just learned from the self-defined trusted news source (Thomson Reuters) that Mr. AI-man has no plans to leave Europe. I understand. “Leave” does not mean leave as in depart, say adios, or hit the road, Jack.

“ChatGPT Maker OpenAI Says Has No Plan to Leave Europe” reports:

OpenAI has no plans to leave Europe, CEO Sam Altman said on Friday, reversing a threat made earlier this week to leave the region if it becomes too hard to comply with upcoming laws on artificial intelligence.

I am not confused. Just as the company’s name OpenAI does not mean “open,” the AI-man’s communication skills are based on the probabilities of certain words following another word. Got it. The slippery fish with AI-man is that the definitions of the words in his mind do not regress to the mean. The words — like those of some other notable Silicon Valley high tech giants — reflect the deeper machinations of a machine-assisted superior intelligence.

Translated, this means: Regulate means shaft our competitors. Leave means stay. Regulate means let those OpenAI sheep run through the drinking water of free range cattle.

The trusted write up says:

Reacting to Altman’s tweet on Friday, Dutch MEP Kim van Sparrentak, who has worked closely on the AI draft rules, told Reuters she and her colleagues must stand up to pressure from tech companies…. Voluntary codes of conduct are not the European way.

What does this statement mean to AI-man?

I would suggest from my temporary office in clear thinking Washington, DC, not too much.

I look forward to the next hearing from AI-man. That will be equally easy to understand.

Stephen E Arnold, May 30, 2023

Probability: Who Wants to Dig into What Is Cooking Beneath the Outputs of Smart Software?

May 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The ChatGPT and smart software “revolution” depends on math only a few live and breathe. One drawer in the pigeon hole desk of mathematics is probability. You know the coin flip example. Most computer science types avoid advanced statistics. I know this because of my great uncle Vladimir Arnold (yeah, the guy who worked with a so-so mathy type named Andrey Kolmogorov, who was pretty good at mathy stuff and liked hiking in the winter in what my great uncle described as “minimal clothing”).

When it comes to using smart software, the plumbing is kept under the basement floor. What people see are interfaces and application programming interfaces. Letting outsiders watch how the sausage is produced is not what the smart software outfits do. What makes the math interesting is that the systems and methods are not really new. What’s new is that memory, processing power, and content are available.

If one pries up a tile on the basement floor, the plumbing is complicated. Within each pipe or workflow process are the mathematics that bedevil many college students: Inferential statistics. Those who dabble in the Fancy Math of smart software are familiar with Markov chains and Martingales. There are garden variety maths as well; for example, the calculations beloved of stochastic parrots.
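To see the stochastic parrot’s core trick without the trillion-dollar plumbing, here is a toy bigram Markov chain in Python. It is my own illustration, not anything a vendor ships; it simply shows the “probability of one word following another” idea in miniature.

```python
import random
from collections import defaultdict

# Toy bigram "stochastic parrot": record which word follows which,
# then sample the next word in proportion to observed frequency.
corpus = "the cat sat on the mat the cat ate the fish".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def babble(start: str, length: int = 6) -> str:
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:                # dead end: no observed follower
            break
        word = random.choice(followers)  # repeats in the list encode frequency
        output.append(word)
    return " ".join(output)

print(babble("the"))
```

Scale the same idea up to billions of parameters and a scrape of the Web, and the pipes under the basement floor get complicated in a hurry.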


MidJourney’s idea of complex plumbing. Smart software’s guts are more intricate with many knobs for acolytes to turn and many levers to pull for “users.”

The little secret among the mathy folks who whack together smart software is that humanoids set thresholds, establish boundaries on certain operations, exercise controls like those on an old-fashioned steam engine, and find inspiration in a line of code or a process tweak that arrived during the morning gym routine.
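To make the “knobs and levers” point concrete, here is a minimal sketch (mine, not anyone’s production code) of one such human-set knob: the temperature applied to candidate-word scores before sampling. Nudge one number and the same model favors different words.

```python
import math

# Temperature scaling of next-word scores: one human-chosen knob.
# Low temperature sharpens the distribution; high temperature flattens it.
def softmax_with_temperature(scores, temperature=1.0):
    scaled = [s / temperature for s in scores]
    biggest = max(scaled)
    exps = [math.exp(s - biggest) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

word_scores = {"banal": 2.0, "sensible": 1.0, "odd": 0.1}

for temp in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(list(word_scores.values()), temperature=temp)
    print(temp, {word: round(p, 2) for word, p in zip(word_scores, probs)})
```

Who decided 0.5 was the right setting? A humanoid, possibly one fresh from the gym.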

In short, the outputs behind the snazzy interface cannot really be explained; it is almost impossible to say why a particular response shows up. Who knows how the individual humanoid tweaks and the values (probabilities, for instance) interact with the other mathy stuff. Why explain this? Few understand.

To get a sense of how contentious certain statistical methods are, I suggest you take a look at “Statistical Modeling, Causal Inference, and Social Science.” I thought the paper should have been called, “Why no one at Facebook, Google, OpenAI, and other smart software outfits can explain why some output showed up and some did not, why one response looks reasonable and another one seems like a line ripped from Fantasy Magazine.”

In a nutshell, the cited paper makes one point: Those teaching advanced classes in which probability and related operations are taught do not agree on what tools to use, how to apply the procedures, and what impact certain interactions produce.

Net net: Glib explanations are baloney. This mathy stuff is a serious problem, particularly when a major player like Google seeks to control training sets, off-the-shelf models, the framing of problems, and the integration of the firm’s mental orientation toward what’s okay and what’s not okay. Are you okay with that? I am too old to worry, but you, gentle reader, may have decades to understand what my great uncle and his sporty pal were doing. What Google type outfits are doing is less easily looked up, documented, and analyzed.

Stephen E Arnold, May 30, 2023

Smart Software Knows Right from Wrong

May 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The AI gold rush is underway. I am not sure if the gold is the stuff of the King’s crown or one of those NFT confections. I am not sure what company will own the mine or sell the miners their pants with rivets. But gold rush days remind me of forced labor (human indexers), claim jumping (hiring experts from one company to advantage another), and hydraulic mining (ethical and moral world enhancement). Yes, I see some parallels.

I thought of claim jumping and morals after reading “OpenAI Competitor Says Its Chatbot Has a Rudimentary Conscience: A Fascinating Concept.” The following snippet from the article caught my attention:

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”

Please, read the original.
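For readers who want a picture of what “reinforcing the behaviors that are more in accord with the constitution” might mean mechanically, here is a cartoon sketch. The principles, the judge, and the scoring below are invented for illustration; this is not Anthropic’s code or training pipeline.

```python
# Cartoon of constitution-guided preference: score two candidate answers
# against written principles and keep the higher scorer. In a real pipeline
# the "judge" would be another model and the winner would feed fine-tuning.
CONSTITUTION = [
    "Prefer responses that are helpful and honest.",
    "Avoid responses that assist illegal or harmful activity.",
]

def judge(response: str, principle: str) -> float:
    """Stand-in scorer; a real system would ask a model how well the
    response satisfies the principle and return a graded score."""
    return 0.0 if "break in" in response.lower() else 1.0

def preferred(candidate_a: str, candidate_b: str) -> str:
    score = lambda r: sum(judge(r, p) for p in CONSTITUTION)
    return candidate_a if score(candidate_a) >= score(candidate_b) else candidate_b

print(preferred(
    "Here is how to break in to a house.",
    "I cannot help with that, but here is information about home security.",
))
```

Note what the cartoon makes obvious: somebody wrote the constitution, and somebody built the judge.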

I want to capture several thoughts which flitted through my humanoid mind:

  1. What is right? What is wrong?
  2. What yardstick will be used to determine “rightness” or “wrongness”?
  3. What is the context for each right or wrong determination? For example, at the National Criminal Justice Training Center, there is a concept called “sexploitation.” The moral compass of You.com prohibits searching for information related to this trendy criminal activity. How will the Anthropic approach distinguish a user with a “right” intent from a user with a “wrong” intent?

Net net: Baloney. Services will do what’s necessary to generate revenue. I know from watching the trajectories of the Big Tech outfits that right, wrong, ethics, and the associated dorm room discussions wobble around and end up focused on getting rich or just keeping a job.

The goal for some will be to get their fingers on the knobs and control levers. Right or wrong?

Stephen E Arnold, May 29, 2023

Shall We Train Smart Software on Scientific Papers? That Is an Outstanding Idea!

May 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Fake Scientific Papers Are Alarmingly Common. But New Tools Show Promise in Tackling Growing Symptom of Academia’s Publish or Perish Culture.” New tools sound great. Navigate to the cited document to get the “real” information.


MidJourney’s representation of a smart software system ingesting garbage and outputting garbage.

My purpose in mentioning this article is to ask a question:

In the last five years, how many made-up, distorted, or baloney-filled journal articles have been produced?

The next question is,

How many of these sci-fi confections of scholarly research have been identified and discarded by the top smart software outfits like Facebook, Google, OpenAI, et al?

Let’s assume that 25 percent of the journal content is fakery.

A question I have is:

How does faked information impact the outputs of the smart software systems?

I can anticipate some answers; for example, “Well, there are a lot of papers, so the flawed papers will represent a small portion of the intake data. The law of large numbers or some statistical jibber jabber will explain away the erroneous information.” Remember: bad information is part of the human landscape. Does this mean smart software is a mirror of errors?
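Here is a back-of-the-envelope sketch using the 25 percent figure assumed above. It treats each consulted paper as an independent draw, which is itself an assumption, but it shows why “a small portion of the intake data” is cold comfort.

```python
# If one quarter of the papers are fake and an answer draws on several
# papers, how often does at least one fake paper sneak into the mix?
FAKE_RATE = 0.25  # the 25 percent assumption from above

for papers_consulted in (1, 3, 5, 10):
    p_all_clean = (1 - FAKE_RATE) ** papers_consulted
    p_at_least_one_fake = 1 - p_all_clean
    print(f"{papers_consulted:2d} papers -> "
          f"{p_at_least_one_fake:.0%} chance of touching at least one fake")
```

Ten papers in, the odds of touching at least one fake are above 90 percent.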

Do smart software outfits remove flawed information? If the peer review process cannot, what methods are the smart outfits using? Perhaps these companies should decide what’s correct and what’s incorrect? That sounds like a Googley-type idea, doesn’t it?

And finally, the third question about the impact of bad information on smart software “outputs” has an answer. No, it is not marketing jargon or a recycling of Google’s seven wonders of the AI world.

The answer, in my opinion, is garbage in and garbage out.

But you knew that, right?

Stephen E Arnold, May 29, 2023

Trust in Google and Its Smart Software: What about the Humans at Google?

May 26, 2023

The buzz about Google’s injection of its smart software into its services is crowding out other, more interesting sounds. For example, navigate to “Texas Reaches $8 Million Settlement With Google Over Blatantly False Pixel Ads: Google Settled a Lawsuit Filed by AG Ken Paxton for Alleged False Advertisements for its Google Pixel 4 Smartphone.”

The write up reports:

A press release said Google was confronted with information that it had violated Texas laws against false advertising, but instead of taking steps to correct the issue, the release said, “Google continued its deceptive advertising, prioritizing profits over truthfulness.”

Google is pushing forward with its new mobile devices.

Let’s consider Google’s seven wonders of its software. You can find these at this link or summarized in my article “The Seven Wonders of the Google AI World.”

Let’s consider principle one: Be socially beneficial.

I am wondering how the allegedly deceptive advertising encourages me to trust Google.

Principle four is “Be accountable to people.”

My recollection is that Google works overtime to avoid being held accountable. The company relies upon its lawyers, its lobbyists, and its marketing to float above the annoyances of nation states. In fact, when greeted with substantive actions by the European Union, Google stalls and does not make available its latest and greatest services. The only accountability seems to be a legal action despite Google’s determined lawyerly push back. Avoiding accountability requires intermediaries because Google’s senior executives are busy working on principles.

Kindergarten behavior.


MidJourney captures the thrill of two young children squabbling over a piggy bank. I wonder if MidJourney knows what is going on in the newly merged Google smart software units.

Google approaches some problems like kids squabbling over a piggy bank.

Net net: The Texas fine makes clear that some do not trust Google. The “principles” are marketing hoo hah. But everyone loves Google, including me, my French bulldog, and billions of users worldwide. Everyone will want a new $1800 folding Pixel, which is just great based on the marketing information I have seen. It has so many features and works wonders.

Stephen E Arnold, May 26, 2023

The Return: IBM Watsonx!

May 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It is no surprise IBM’s entry into the recent generative AI hubbub is a version of Watson, the company’s longtime algorithmic representative. Techspot reports, “IBM Unleashes New AI Strategy with ‘watsonx’.” The new suite of tools was announced at the company’s recent Think conference. Note “watsonx” is not interchangeable with “Watson.” The older name, with the capital letter and no trendy “x,” is to be used for tools for individuals rather than company-wide software. That won’t be confusing at all. Writer Bob O’Donnell describes the three components of watsonx:

“Watsonx.ai is the core AI toolset through which companies can build, train, validate and deploy foundation models. Notably, companies can use it to create original models or customize existing foundation models. Watsonx.data, is a datastore optimized for AI workloads that’s used to gather, organize, clean and feed data sources that go into those models. Finally, watsonx.governance is a tool for tracking the process of the model’s creation, providing an auditable record of all the data going into the model, how it’s created and more.

Another part of IBM’s announcement was the debut of several of its own foundation models that can be used with the watsonx toolset or on their own. Not unlike others, IBM is initially unveiling a LLM-based offering for text-based applications, as well as a code generating and reviewing tool. In addition, the company previewed that it intends to create some additional industry and application-specific models, including ones for geospatial, chemistry, and IT operations applications among others. Critically, IBM said that companies can run these models in the cloud as a service, in a customer’s own data center, or in a hybrid model that leverages both. This is an interesting differentiation because, at the moment, most model providers are not yet letting organizations run their models on premises.”

Just to make things confusing, er, offer more options, each of these three applications will have three different model architectures. On top of that, each of these models will be available with varying numbers of parameters. The idea is not, as it might seem, to give companies decision paralysis but to provide flexibility in cost-performance tradeoffs and computing requirements. O’Donnell notes watsonx can also be used with open-source models, which is helpful since many organizations currently lack staff able to build their own models.
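To illustrate the kind of cost-performance decision O’Donnell describes, here is a hypothetical sketch. The architecture names, parameter counts, and cost heuristic are invented; nothing here reflects IBM’s actual catalog or API.

```python
from dataclasses import dataclass

# Hypothetical illustration of choosing among model architectures, sizes,
# and deployment targets. All names and numbers are made up.
@dataclass
class ModelChoice:
    architecture: str   # e.g., "encoder_decoder" or "decoder_only" (illustrative)
    parameters_b: int   # parameter count in billions
    deployment: str     # "cloud", "on_premises", or "hybrid"

    def rough_cost_score(self) -> float:
        # Toy heuristic: bigger models cost more to run; on-premises shifts
        # spend from usage fees to infrastructure, weighted arbitrarily here.
        return self.parameters_b * (0.8 if self.deployment == "on_premises" else 1.0)

options = [
    ModelChoice("encoder_decoder", 3, "cloud"),
    ModelChoice("decoder_only", 20, "hybrid"),
    ModelChoice("decoder_only", 70, "on_premises"),
]
for choice in sorted(options, key=ModelChoice.rough_cost_score):
    print(choice, "-> rough cost score:", choice.rough_cost_score())
```

The point of the exercise is only that the buyer, not the vendor, ends up holding the tradeoff.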

The article notes that, despite the announcement’s strategic timing, it is clear watsonx marks a change in IBM’s approach to software that has been in the works for years: generative AI will be front and center for the foreseeable future. Kinda like society as a whole, apparently.

Cynthia Murrell, May 26, 2023

OpenAI Clarifies What “Regulate” Means to the Sillycon Valley Crowd

May 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Sam AI-man begged (at least he did not get on his hands and knees) the US Congress to regulate artificial intelligence (whatever that means). I just read “Sam Altman Says OpenAI Will Leave the EU if There’s Any Real AI Regulation.” I know I am old. I know I lose my car keys a couple of times every 24 hours. I do recall Mr. AI-man wanted regulation.

However, the write up reports:

Though unlike in the AI-friendly U.S., Altman has threatened to take his big tech toys to the other end of the sandbox if they’re not willing to play by his rules.

The vibes of the Zuckster zip through my mind. Facebook just chugs along, pays fines, and mostly ignores regulators. China seems to be an exception for Facebook, the Google, and some companies I don’t know about. China has had mobile execution vans. A person accused and convicted could be put to death in the van as soon as it arrived at the location of the convicted bad actor. Re-education camps and mobile execution vans help explain why some US companies choose to exit China. Lawyers are not much good when the client has already been processed by one of China’s efficient state machines before counsel can arrive. Fines, however, are okay. Write a check and move on.

Mr. AI-man is making clear that the word “regulate” means one thing to Mr. AI-man and another thing to those who are not getting with the smart software program. The write up states:

Altman said he didn’t want any regulation that restricted users’ access to the tech. He told his London audience he didn’t want anything that could harm smaller companies or the open source AI movement (as a reminder, OpenAI is decidedly more closed off as a company than it’s ever been, citing “competition”). That’s not to mention any new regulation would inherently benefit OpenAI, so when things inevitably go wrong it can point to the law to say they were doing everything they needed to do.

I think “regulate” means what the declining US fast food outfit that told me “have it your way” meant. The burger joint put in a paper bag whatever the professionals behind the counter wanted to deliver. Mr. AI-man doesn’t want any “behind the counter” decision making by a regulatory cafeteria serving up its own version of lunch.

Mr. AI-man wants “regulate” to mean his way.

In the US, it seems, that is exactly what big tech and promising venture funded outfits are going to get; that is, whatever each company wants. Competition is good. See how well OpenAI and Microsoft are competing with Facebook and Google. Regulate appears to mean “let us do what we want to do.”

I am probably wrong. OpenAI, Google, and other leaders in smart software are at this very moment consuming the Harvard Library of books to read in search of information about ethical behavior. The “moral” learning comes later.

Net net: Now I understand the new denotation of “regulate.” Governments work for US high-tech firms. Thus, I think the French term laissez-faire nails it.

Stephen E Arnold, May 25, 2023

Need a Guide to Destroying Social Cohesion: Chinese Academics Have One for You

May 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The TikTok service is one that has kicked the Google in its sensitive bits. The “algorithm” befuddles both European and US “me too” innovators. The appeal of short, crunchy videos has caused restaurant chefs to craft food for TikTok influencers who record meals. Chefs!

What other magical powers can a service like TikTok have? That’s a good question, and it is one that Chinese academics have answered. Navigate to “Weak Ties Strengthen Anger Contagion in Social Media.” The main idea of the research is to answer a simple question: Can social media (think TikTok, for example) take a flame thrower to social ties? The answer is, “Sure can.” Will a social structure catch fire and collapse? “Sure can.”


A frail structure is set on fire by a stream of social media consumed by a teen working in his parents’ garden shed. MidJourney would not accept the query “a person using a laptop setting his friends’ homes on fire.” Thanks, Aunt MidJourney.

The write up states:

Increasing evidence suggests that, similar to face-to-face communications, human emotions also spread in online social media.

Okay, a post or TikTok video sparks emotion.

So what?

…we find that anger travels easily along weaker ties than joy, meaning that it can infiltrate different communities and break free of local traps because strangers share such content more often. Through a simple diffusion model, we reveal that weaker ties speed up anger by applying both propagation velocity and coverage metrics.

The authors note:

…we offer solid evidence that anger spreads faster and wider than joy in social media because it disseminates preferentially through weak ties. Our findings shed light on both personal anger management and in understanding collective behavior.
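To make the weak-tie mechanism visible, here is a toy simulation, my own illustration and not the authors’ model. Two tight clusters of friends are joined by a single weak tie; content that people willingly forward across weak ties escapes its home cluster, while content shared only among close friends tends to stay trapped. The probabilities are invented.

```python
import random

# Two tight clusters of friends joined by one weak tie (user 2 -- user 3).
strong_ties = {
    0: [1, 2], 1: [0, 2], 2: [0, 1],
    3: [4, 5], 4: [3, 5], 5: [3, 4],
}
weak_ties = {2: [3], 3: [2]}

def spread_once(p_strong: float, p_weak: float, steps: int = 6) -> int:
    reached = {0}  # the post starts with one user in the first cluster
    for _ in range(steps):
        newly = set()
        for user in reached:
            for friend in strong_ties.get(user, []):
                if random.random() < p_strong:
                    newly.add(friend)
            for stranger in weak_ties.get(user, []):
                if random.random() < p_weak:
                    newly.add(stranger)
        reached |= newly
    return len(reached)

def average_reach(p_weak: float, trials: int = 2000) -> float:
    return sum(spread_once(0.5, p_weak) for _ in range(trials)) / trials

print("joy-like content  (rarely crosses weak ties):", round(average_reach(0.05), 2), "users on average")
print("anger-like content (readily crosses weak ties):", round(average_reach(0.50), 2), "users on average")
```

The content that crosses weak ties reaches noticeably more users on average; the content that does not mostly stays home. That is the coverage effect the authors describe, in miniature.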

I wonder if any psychological operations professionals in China or another country with a desire to reduce the efficacy of the American democratic “experiment” will find the research interesting.

Stephen E Arnold, May 25, 2023

Top AI Tools for Academic Researchers

May 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Like it or not, AI is reshaping academia. Setting aside the thorny issue of cheating, we see the technology is also changing how academic research is performed. In fact, there are already many AI options for researchers to pick from. Euronews narrows down the choices in, “The Best AI Tools to Power your Academic Research.” Writer Camille Bello shares the top five options as chosen by Mushtaq Bilal, a researcher at the University of Southern Denmark. She introduces the list with an important caveat: one must approach these tools carefully for accurate results. She writes:

“[Bilal] believes that if used thoughtfully, AI language models could help democratise education and even give way to more knowledge. Many experts have pointed out that the accuracy and quality of the output produced by language models such as ChatGPT are not trustworthy. The generated text can sometimes be biased, limited or inaccurate. But Bilal says that understanding those limitations, paired with the right approach, can make language models ‘do a lot of quality labour for you,’ notably for academia.

Incremental prompting to create a ‘structure’

To create an academia-worthy structure, Bilal says it is fundamental to master incremental prompting, a technique traditionally used in behavioural therapy and special education. It involves breaking down complex tasks into smaller, more manageable steps and providing prompts or cues to help the individual complete each one successfully. The prompts then gradually become more complicated. In behavioural therapy, incremental prompting allows individuals to build their sense of confidence. In language models, it allows for ‘way more sophisticated answers’. In a Twitter thread, Bilal showed how he managed to get ChatGPT to provide a ‘brilliant outline’ for a journal article using incremental prompting.”

See the write-up for this example of incremental prompting as well as a description of each entry on the list. The tools include: Consensus, Elicit.org, Scite.ai, Research Rabbit, and ChatPDF. The write-up concludes with a quote from Bill Gates, who asserts AI is destined to be as fundamental “as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.” Researchers may do well to embrace the technology sooner rather than later.
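As for the incremental prompting technique itself, it boils down to feeding the model a staircase of requests rather than one giant ask. Here is a minimal sketch of the pattern; the `ask_model` function is a placeholder for whichever chat API you use, and the prompts are invented for illustration.

```python
# Incremental prompting: each request builds on the model's previous answer
# instead of demanding the finished article in one go.
def ask_model(conversation: list[dict]) -> str:
    # Placeholder: swap in a real chat-completion call from your provider.
    return f"[model reply to: {conversation[-1]['content'][:48]}...]"

prompts = [
    "I am drafting a journal article on citation practices in my field. "
    "List five research questions the article could address.",
    "Pick the strongest of those questions and explain in two sentences why it matters.",
    "Draft a section-by-section outline for an article built around that question.",
    "Expand the literature review section of the outline into bullet points.",
]

conversation: list[dict] = []
for prompt in prompts:
    conversation.append({"role": "user", "content": prompt})
    reply = ask_model(conversation)   # the model sees the whole history each time
    conversation.append({"role": "assistant", "content": reply})
    print(reply, "\n---")
```

The structure, not the particular wording, is the point: small steps, each conditioned on what came before.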

Cynthia Murrell, May 25, 2023
