Handwaving at Light Speed: Control Smart Software Now!

June 13, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Here is an easy one: Vox ponders, “What Will Stop AI from Flooding the Internet with Fake Images?” “Nothing” is the obvious answer. Nevertheless, tech companies are making a show of an effort. Writer Shirin Ghaffary begins by recalling the recent kerfuffle caused by a realistic but fake photo of a Pentagon explosion. The spoof even affected the stock market, though briefly. We are poised to see many more AI-created images swamp the Internet, and they won’t all be so easily fact-checked. The article explains:

“This isn’t an entirely new problem. Online misinformation has existed since the dawn of the internet, and crudely photoshopped images fooled people long before generative AI became mainstream. But recently, tools like ChatGPT, DALL-E, Midjourney, and even new AI feature updates to Photoshop have supercharged the issue by making it easier and cheaper to create hyper realistic fake images, video, and text, at scale. Experts say we can expect to see more fake images like the Pentagon one, especially when they can cause political disruption. One report by Europol, the European Union’s law enforcement agency, predicted that as much as 90 percent of content on the internet could be created or edited by AI by 2026. Already, spammy news sites seemingly generated entirely by AI are popping up. The anti-misinformation platform NewsGuard started tracking such sites and found nearly three times as many as they did a few weeks prior.”

Several ideas are being explored. One is to tag AI-generated images with watermarks, metadata, and disclosure labels, but of course those can be altered or removed. Then there is the tool from Adobe that tracks whether images are edited by AI, tagging each with “content credentials” that supposedly stick with a file forever. Another is to approach from the other direction and stamp content that has been verified as real. The Coalition for Content Provenance and Authenticity (C2PA) has created a specification for this purpose.
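How easy is it to sidestep naive tagging? Here is a minimal Python sketch, assuming Pillow is installed, that stamps a PNG with a provenance note and reads it back. To be clear, this toy is not the C2PA specification or Adobe’s content credentials; the file names and the “ai_provenance” key are invented for illustration:

```python
# A toy provenance tag stored in a PNG text chunk via Pillow.
# NOT C2PA -- it only shows why naive metadata is easy to strip.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_image(src_path: str, dst_path: str, note: str) -> None:
    """Embed a provenance note as a PNG text chunk."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_provenance", note)
    image.save(dst_path, pnginfo=metadata)

def read_tag(path: str):
    """Return the provenance note, if it survived."""
    return Image.open(path).text.get("ai_provenance")

tag_image("generated.png", "tagged.png", "created-by: some-image-model")
print(read_tag("tagged.png"))  # -> 'created-by: some-image-model'
# Re-encode the file as a JPEG or take a screenshot, and the tag
# silently vanishes -- which is exactly the problem.
```

Anything sturdier has to bind the claim cryptographically to the pixels, and even then a determined bad actor can simply re-render the image.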

But even if bad actors could not find ways around such measures, and they can, will audiences care? So far it looks like that is a big no. We already knew confirmation bias trumps facts for many. Watermarks and authenticity seals will hold little sway for those already inclined to take what their filter bubbles feed them at face value.

Cynthia Murrell, June 13, 2023

Two Polemics about the Same Thing: Info Control

June 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Polemics are fun. The term, as I use it, means:

a speech or piece of writing expressing a strongly critical attack on or controversial opinion about someone or something.

I took the definition from Google’s presentation of the “Oxford Languages.” I am not sure what that means, but since we are considering two polemics, the definition is close enough for horseshoes. Furthermore, polemics are not into facts, verifiable assertions, or hard data. I think of polemics as blog posts by individuals whom some might consider fanatics, apologists, crusaders, or zealots.

Ah, you don’t agree? Tough noogies, gentle reader.

The first document I read and fed into Browserling’s free word frequency tool was Marc Andreessen’s delightful “Why AI Will Save the World.” The document has a repetitive contents listing, which some readers may find useful. For me, the effort to stay on track added duplicate words.

The second document I read and stuffed into the Browserling tool was the entertaining, and in my opinion, fluffy, Areopagitica, made available by Dartmouth.

The mechanics of the analysis were simple. I compared the frequency of words which I find indicative of a specific rhetorical intent. Mr. Andreessen is probably better known to modern readers than John Milton, and his contribution to polemic literature is arguably more readable. There’s the clumsy organizational impedimenta. There are shorter sentences. There are what I would describe as Silicon Valley words. Furthermore, based on Bing, Google, and Yandex searches for the text of the document, one can find Mr. Andreessen’s contribution to the canon in more places than John Milton’s lame effort. I want to point out that Mr. Milton’s polemic is substantially longer than Mr. Andreessen’s. I did what most careless analysts would do: I took the full text of Mr. Andreessen’s screed and snagged the first 8,000 words of Mr. Milton’s writing, a work known to bring tears to the eyes of first-year college students asked to read the prose and write a 500-word analytic essay about Areopagitica. Good training, I believe, for a debate student, a future lawyer, or a person who wants to write for Reader’s Digest.
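For the curious, the counting is trivial to reproduce. Below is a rough Python equivalent of my Browserling exercise; the file names are hypothetical, the tokenizer is crude, and exact counts will differ a bit from the table that follows:

```python
import re
from collections import Counter

# The words I treat as indicative of rhetorical intent.
TERMS = ["ai", "all", "ethics", "every", "everyone", "everything",
         "everywhere", "infinitely", "moral", "morality", "obviously",
         "should", "would"]

def word_counts(text, limit=None):
    """Lowercase the text, split into words, optionally keep the first N."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words if limit is None else words[:limit])

# Full text of the Andreessen essay; first 8,000 words of Areopagitica.
andreessen = word_counts(open("andreessen.txt").read())
milton = word_counts(open("milton.txt").read(), limit=8000)

for term in TERMS:
    print(f"{term:<12}{andreessen[term]:>8}{milton[term]:>8}")
```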

So what did I find?

First, both Mr. Andreessen and Mr. Milton needed to speak out for their ideas. Mr. Andreessen is an advocate of smart software. Mr. Milton wanted a censorship-free approach to publishing. Both assumed that “they” or people not on their wavelength needed convincing about the importance of their ideas. It is safe to say that the audiences for these two polemics are not clued into the subject. Mr. Andreessen is speaking to those who are jazzed on smart software, neglecting to point out that smart software is pretty common in the online advertising sector. Mr. Milton assumed that censorship was a new threat, electing to ignore that religious authorities, educational institutions, and publishers were happily censoring information 24×7. But that’s the world of polemicists.

Second, what about the words used by each author? Since this is written for my personal blog, I will boil down my findings to a handful of words.

The table below presents 13 selected words and the count of each:

Word          Andreessen   Milton
AI                   157        0
All                   34        54
Ethics                 1        0
Every                 20        8
Everyone               7        0
Everything             6        0
Everywhere             4        0
Infinitely             9        0
Moral                  9        0
Morality               2        0
Obviously              4        0
Should                23        22
Would                 21        10

Several observations:

  1. Messrs. Andreessen and Milton share an absolutist approach. The word “all” figures prominently in both polemics.
  2. Mr. Andreessen uses “every” words to make clear that AI is applicable to just about anything one cares to name. Logical? Hey, these are polemics. The logic is internal.
  3. Messrs. Andreessen and Milton share a fondness for adulting. Note the frequency of “should” and “would.”
  4. Mr. Andreessen has an interest in ethical and moral behavior. Mr. Milton writes around these notions.

Net net: Polemics are designed as marketing collateral. Mr. Andreessen is marketing, as is Mr. Milton. Which pitch is better? The answer depends on the criteria one uses to judge polemics. I give the nod to Mr. Milton. His polemic is longer, has freight-train sentences, and is, for a modern college freshman, almost unreadable. Mr. Andreessen’s polemic is sportier. It’s about smart software, not censorship directly. However, both polemics boil down to who has his or her hands on the content levers.

Stephen E Arnold, June 12, 2023

Bad News for Humanoids: AI Writes Better Pitch Decks But KFC Is Hiring

June 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Who would have envisioned a time when MBAs with undergraduate finance majors would be given an opportunity to work at a Kentucky Fried Chicken store? What was the slogan about fingers? I can’t remember.

“If You’re Thinking about Writing Your Own Pitch Decks, Think Again” provides some interesting information. I assume that today’s version of Henry Robinson Luce’s flagship magazine (no, not the Sports Illustrated swimsuit edition) would shatter the work life of those who create pitch decks. A “pitch deck” is a sonnet for our digital era. The phrase is often associated with a group of PowerPoint slides designed to get a funding source to write a check. That use case, however, is not the only place pitch decks come into play: Academics use them when trying to explain why a research project deserves funding. Ad agencies craft them to win client work or, in some cases, to convince a client not to fire the creative team. (Hello, Bud Light advisors, are you paying attention?) Real estate professionals create them to show to high-net-worth individuals. The objective is to close a deal for one of those bizarro vacant mansions shown by YouTube explorers. See, for instance, this white elephant lovingly presented by Dark Explorations. And there are more pitch deck applications. That’s why the phrase “Death by PowerPoint is real” is semi-poignant.

What if a pitch deck could be made better? What if pitch decks could be produced quickly? What if pitch decks could be graphically enhanced without fooling around with Fiverr.com artists in Armenia or the professionals with orange and blue hair?

The Fortune article states:

The study [funded by Clarify Capital] revealed that machine-generated pitch decks consistently outperformed their human counterparts in terms of quality, thoroughness, and clarity. A staggering 80% of respondents found the GPT-4 decks compelling, while only 39% felt the same way about the human-created decks. [Emphasis added]

The cited article continues:

What’s more, GPT-4-presented ventures were twice as convincing to investors and business owners compared to those backed by human-made pitch decks. In an even more astonishing revelation, GPT-4 proved to be more successful in securing funding in the creative industries than in the tech industry, defying assumptions that machine learning could not match human creativity due to its lack of life experience and emotions. [Emphasis added]


“Would you like regular or crispy?” asks the MBA who wants to write pitch decks for a VC firm whose managing director his father knows. The image emerged from the murky math of MidJourney. Better, faster, and cheaper than a contractor, I might add.

Here’s a link to the KFC.com Web site. Smart software works better, faster, and cheaper. But it has a drawback: At this time, the KFC professional is needed to put those thighs in the fryer.

Stephen E Arnold, June 12, 2023


Moral Decline? Nah, Just Your Perception at Work

June 12, 2023

Here’s a graph from the academic paper “The Illusion of Moral Decline.”


Is it even necessary to read the complete paper after studying the illustration? Of course not. Nevertheless, let’s look at a couple of statements in the write up to get ready for that in-class, blank bluebook semester examination, shall we?

Statement 1 from the write up:

… objective indicators of immorality have decreased significantly over the last few centuries.

Well, there you go. That’s clear. Imagine what life was like before modern day morality kicked in.

Statement 2 from the write up:

… we suggest that one of them has to do with the fact that when two well-established psychological phenomena work in tandem, they can produce an illusion of moral decline.

Okay. Illusion. This morning I drove past people sleeping under an overpass. A police vehicle with lights and siren blaring raced past me as I drove to the gym (a gym which is no longer open 24×7 due to safety concerns). I listened to a report about people struggling amidst the flood water in Ukraine. In short, a typical morning in rural Kentucky. Oh, I forgot to mention the gunfire I could hear as I walked my dog at a local park. I hope it was squirrel hunters, but in this area, who knows?


MidJourney created this illustration of the paper’s authors celebrating the publication of their study about the illusion of immorality. The behavior is a manifestation of morality itself, and it is a testament to the importance of crystal clear graphs.

Statement 3 from the write up:

Participants in the foregoing studies believed that morality has declined, and they believed this in every decade and in every nation we studied….About all these things, they were almost certainly mistaken.

My take on the study includes these perceptions (yours hopefully will be more informed than mine):

  1. The influence of social media gets slight attention
  2. Large-scale immoral actions get little attention. I am tempted to list examples, but I am afraid of legal eagles and aggrieved academics with time on their hands.
  3. The impact of intentionally weaponized information on behavior in the US and other nation states gets little scrutiny, particularly where the infrastructure permits wide use of digitally enabled content.

In order to avoid problems, I will list some common and proper nouns or phrases and invite you to think about these in terms of the glory word “morality”. Have fun with your mental gymnastics:

  • Catholic priests and children
  • Covid information and pharmaceutical companies
  • Epstein, Andrew, and MIT
  • Special operation and elementary school children
  • Sudan and minerals
  • US politicians’ campaign promises.

Wasn’t that fun? I did not have to mention social media, self-harm, people between the ages of 10 and 16, and statements like “Senator, thank you for that question…”

I would not do well with a written test watched by attentive journal authors. By the way, isn’t perception reality?

Stephen E Arnold, June 12, 2023

Google: FUD Embedded in the Glacier Strategy

June 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Fly to Alaska. Stand on a glacier and let the guide explain that the glacier moves, just slowly. That’s the Google smart software strategy in a nutshell, operating under Code Red or Red Alert or “My goodness, Microsoft is getting media attention for something other than lousy code and security services. We have to do something sort of quickly.”

One facet of the game plan is to roll out a bit of FUD or fear, uncertainty, and doubt. That will send chills to some interesting places, won’t it? You can see this in action in the article “Exclusive: Google Lays Out Its Vision for Securing AI.” Feel the fear because AI will kill humanoids unless… unless you rely on Googzilla. This is the only creature capable of stopping the evil that irresponsible smart software will unleash upon you, everyone, maybe your dog too.


The manager of strategy says, “I think the fireball of AI security doom is going to smash us.” The top dog says, “I know. Google will save us.” Note to image trolls: This outstanding illustration was generated in a nonce by MidJourney, not an under-compensated creator in Peru.

The write up says:

Google has a new plan to help organizations apply basic security controls to their artificial intelligence systems and protect them from a new wave of cyber threats.

Note the word “plan”; that is, the here and now equivalent of vaporware or stuff that can be written about and issued as “real news.” The guts of the Google PR is that Google has six easy steps for its valued users to take. Each step brings that user closer to the thumping heart of Googzilla; to wit:

  • Assess what existing security controls can be easily extended to new AI systems, such as data encryption;
  • Expand existing threat intelligence research to also include specific threats targeting AI systems;
  • Adopt automation into the company’s cyber defenses to quickly respond to any anomalous activity targeting AI systems;
  • Conduct regular reviews of the security measures in place around AI models;
  • Constantly test the security of these AI systems through so-called penetration tests and make changes based on those findings;
  • And, lastly, build a team that understands AI-related risks to help figure out where AI risk should sit in an organization’s overall strategy to mitigate business risks.

Does this sound like Mandiant-type consulting backed up by Google’s cloud goodness? It should, because when one drinks Google juice, one gains Google powers over evil and over Google’s competitors. Google’s glacier strategy is advancing… slowly.

Stephen E Arnold, June 9, 2023

Microsoft Code: Works Great. Just Like Bing AI

June 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

For Windows users struggling with certain apps, help is not on the way anytime soon. In fact, reports TechRadar, “Windows 11 Is So Broken that Even Microsoft Can’t Fix It.” The issues started popping up for some users of Windows 11 and Windows 10 in January and seem to coincide with damaged registry keys. For now the company’s advice sounds deceptively simple: ditch its buggy software. Not a great look. Writer Matt Hanson tells us:

“On Microsoft’s ‘Health’ webpage regarding the issue, Microsoft notes that the ‘Windows search, and Universal Windows Platform (UWP) apps might not work as expected or might have issues opening,’ and in a recent update it has provided a workaround for the problem. Not only is the lack of a definitive fix disappointing, but the workaround isn’t great, with Microsoft stating that to ‘mitigate this issue, you can uninstall apps which integrate with Windows, Microsoft Office, Microsoft Outlook or Outlook Calendar.’ Essentially, it seems like Microsoft is admitting that it’s as baffled as us by the problem, and that the only way to avoid the issue is to start uninstalling apps. That’s pretty poor, especially as Microsoft doesn’t list the apps that are causing the issue, just that they integrate with ‘Windows, Microsoft Office, Microsoft Outlook or Outlook Calendar,’ which doesn’t narrow it down at all. It’s also not a great solution for people who depend on any of the apps causing the issue, as uninstalling them may not be a viable option.”

The write-up notes Microsoft says it is still working on these issues. Will it release a fix before most users have installed competing programs or, perhaps, even a different OS? Or maybe Windows 11 snafus are just what is needed to distract people from certain issues related to the security of Microsoft’s enterprise software. Will these code faults surface (no pun intended) in Microsoft’s smart software? Of course not. Marketing makes software better.

Cynthia Murrell, June 9, 2023

AI: Immature and a Bit Unpredictable

June 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Writers, artists, programmers, other creative professionals, and workers with potentially automatable jobs are worried that AI algorithms are going to replace them. ChatGPT is making headlines for its supposed universality in automating tasks and writing Web content. While ChatGPT cannot write succinct Shakespearean drama yet, it can draft a decent cover letter. Vice News explains why we do not need to fear the AI apocalypse yet: “Scary ‘Emergent’ AI Abilities Are Just A ‘Mirage’ Produced By Researchers, Stanford Study Says.”


Responsible adults — one works at Google and the other at Microsoft — don’t know what to do with their unhappy baby named AI. The image is a product of the MidJourney system which Getty Images may not find as amusing as I do.

Stanford researchers wrote a paper claiming “that so-called ‘emergent abilities’ in AI models—when a large model suddenly displays an ability it ostensibly was not designed to possess—are actually a ‘mirage’ produced by researchers.” Technology leaders, such as Google CEO Sundar Pichai, perpetuate the idea that large language models like Google Bard are teaching themselves skills not in their initial training programs. For example, Google Bard can translate Bengali and GPT-4 can solve complex tasks without special assistance. Neither AI, the story goes, had relevant information included in its training dataset to reference.

When technology leaders tell the public about these AI abilities, news outlets automatically perpetuate doomsday scenarios, while businesses want to exploit them for profit. The Stanford study explains that different AI developers measure outcomes differently and assume that smaller AI models are incapable of solving complex problems. The researchers also claim that AI experts make overblown claims, likely for investment or notoriety. The Stanford researchers encourage their brethren to be more realistic:

“The authors conclude the paper by encouraging other researchers to look at tasks and metrics distinctly, consider the metric’s effect on the error rate, and that the better-suited metric may be different from the automated one. The paper also suggests that other researchers take a step back from being overeager about the abilities of large language models. ‘When making claims about capabilities of large models, including proper controls is critical,’ the authors wrote in the paper.”
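The metric point deserves one concrete illustration. In the toy Python sketch below (my numbers, not the Stanford team’s data), the underlying per-token accuracy improves smoothly across model sizes, yet scoring the same models with an all-or-nothing exact-match metric produces the cliff that gets labeled “emergent”:

```python
import numpy as np

# Hypothetical smooth improvement in per-token accuracy as models scale.
per_token_accuracy = np.linspace(0.90, 0.99, 10)

# Now score the very same models with an all-or-nothing metric:
# an answer counts only if all 50 of its tokens are correct.
answer_length = 50
exact_match = per_token_accuracy ** answer_length

for p, em in zip(per_token_accuracy, exact_match):
    print(f"per-token accuracy {p:.2f} -> exact-match rate {em:.3f}")
# Ability climbs gradually (0.90 -> 0.99), but exact match jumps
# from roughly 0.005 to 0.605 -- a "sudden" emergent skill.
```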

It would be awesome if news outlets and the technology experts told the world that an AI takeover is still decades away. Nope, the baby AI wants cash, fame, a clean diaper, and a warm bottle… now.

Whitney Grace, June 9, 2023

OpenAI: Someone, Maybe the UN? Take Action Before We Sign Up More Users

June 8, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I wrote about Sam AI-man’s use of language in my humanoid-written essay “Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?” Now the vocabulary of Mr. AI-man has been enriched. For a recent example, please navigate to “OpenAI CEO Suggests International Agency Like UN’s Nuclear Watchdog Could Oversee AI.” I am loath to quote from an AP (once an “associated press”) story due to the current entity’s policy related to citing its “real news.”

In the allegedly accurate “real news” story, I learned that Mr. AI-man has floated the idea for a United Nation’s agency to oversee global smart software. Now that is an idea worthy of a college dorm room discussion at Johns Hopkins University’s School of Advanced International Studies in always-intellectually sharp Washington, DC.


UN Representative #1: What exactly is artificial intelligence? UN Representative #2: How can we leverage it for fundraising? UN Representative #3: Does anyone have an idea how we could use smart software to influence our friends in certain difficult nation states? UN Representative #4: Is it time for lunch? Illustration crafted with imagination, love, and care by MidJourney.

The model, as I understand the “real news” story, is that the UN would be the guard dog for bad applications of smart software. Mr. AI-man’s example of UN effectiveness is the entity’s involvement in nuclear power. (How is that working out in Iran?) The write up also references the notion of guard rails. (Are there guard rails on other interesting technology; for example, Instagram’s somewhat relaxed approach to certain information related to youth?)

If we put the “make sure we come together as a globe” statement in the context of Sam AI-man’s other terminology, I wonder if PR and looking good is more important than generating traction and revenue from OpenAI’s innovations.

Of course not. The UN can do it. How about those UN peace keeping actions in Africa? Complete success from Mr. AI-man’s point of view.

Stephen E Arnold, June 8, 2023, 9:29 am US Eastern

Japan and Copyright: Pragmatic and Realistic

June 8, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Japan Goes All In: Copyright Doesn’t Apply To AI Training.” In a nutshell, Japan’s alleged stance is accompanied with a message for “creators”: Tough luck.


“You are ripping off my content. I don’t think that is fair. I am a creator.” The image of a testy office lady is the product of MidJourney’s derivative capabilities.

The write up asserts:

It seems Japan’s stance is clear – if the West uses Japanese culture for AI training, Western literary resources should also be available for Japanese AI. On a global scale, Japan’s move adds a twist to the regulation debate. Current discussions have focused on a “rogue nation” scenario where a less developed country might disregard a global framework to gain an advantage. But with Japan, we see a different dynamic. The world’s third-largest economy is saying it won’t hinder AI research and development. Plus, it’s prepared to leverage this new technology to compete directly with the West.

If this is the direction in which Japan is heading, what’s the posture in China, Viet-Nam, and other countries in the region? How can the US regulate for an unknown future? We know Japan’s approach, it seems.

Stephen E Arnold, June 8, 2023

How Does One Train Smart Software?

June 8, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It is awesome when geekery collides with the real world, such as in the development of AI. These geekery hints prove that fans are everywhere and that the influence of fictional worlds leaves a lasting impact. Usually these hints involve naming a new discovery after a favorite character or franchise, but the collision might not be good for copyrighted books beloved by geeks everywhere. The New Scientist reports that “ChatGPT Seems To Be Trained On Copyrighted Books Like Harry Potter.”

In order to train AI models, developers need large datasets. Datasets can range from information on social media platforms to shopping databases like Amazon’s. The problem with ChatGPT is that it appears its developers at OpenAI used copyrighted books as training data. If OpenAI used copyrighted materials, it brings into question whether the datasets were legally created.

Associate Professor David Bamman of the University of California, Berkeley, and his team studied ChatGPT. They hypothesized that OpenAI used copyrighted material. Using 600 fiction books published between 1924 and 2020, Bamman and his team selected 100 passages from each book that had a single, named character. The name was blanked out of each passage, and ChatGPT was asked to fill it in. ChatGPT had a 98% accuracy rate with books by authors ranging from J.K. Rowling and Ray Bradbury to Lewis Carroll and George R.R. Martin.
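For readers who want to see the shape of the probe, here is a minimal sketch of that name-cloze test, assuming the OpenAI Python package as it existed in mid-2023; the prompt wording and sample passage are my guesses, not the Berkeley team’s exact protocol:

```python
import openai  # assumes the 2023-era openai package and an API key set

def name_cloze(passage_with_mask: str) -> str:
    """Ask the model to restore the single masked character name."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[{
            "role": "user",
            "content": "Fill in [MASK] with the missing proper name. "
                       "Reply with the name only.\n\n" + passage_with_mask,
        }],
    )
    return response.choices[0].message.content.strip()

# A model that memorized the book should find this trivial.
print(name_cloze("'Yer a wizard, [MASK],' said Hagrid."))
```

Run something like this over 100 masked passages per book, and the fraction of correctly restored names becomes a rough memorization score.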

If ChatGPT is merely being trained on these books, does that violate copyright?

“ ‘The legal issues are a bit complicated,’ says Andres Guadamuz at the University of Sussex, UK. ‘OpenAI is training GPT with online works that can include large numbers of legitimate quotes from all over the internet, as well as possible pirated copies.’ But these AIs don’t produce an exact duplicate of a text in the same way as a photocopier, which is a clearer example of copyright infringement. ‘ChatGPT can recite parts of a book because it has seen it thousands of times,’ says Guadamuz. ‘The model consists of statistical frequency of words. It’s not reproduction in the copyright sense.’”

Individual countries will need to determine dataset rules, but it is preferable to notify authors that their material is being used. Fiascos are already happening with AI-generated art built from appropriated images.

ChatGPT was mostly trained on science fiction novels and did not read much fiction from minority authors like Toni Morrison. Bamman said ChatGPT is lacking representation. That is one way to describe the datasets, but it more likely reflects the human AI developers’ reading tastes. I assume there was little interest in books about ethics, moral behavior, and the old-fashioned William James’s view of right and wrong. I think I assume correctly.

Whitney Grace, June 8, 2023
