Generative AI: Good or Bad the Content Floweth Forth

August 11, 2023

Hollywood writers are upset that major studios want to replace them with AI algorithms. While writing bots have not replaced human writers yet AI algorithms such as ChatGPT, Ryter, Writing.io, and more are everywhere. Threat Source Newsletter explains that, “Every Company Has Its Own Version of ChatGPT Now.”

8 7 flood of content

A flood of content. Thinking drowned. Thanks Mid Journey. I wanted words but got letters. Great Job.

AI writing algorithms are also known as AI assistants. They are programmed to answer questions and perform text-based tasks. The text-based tasks include writing résumés, outlines, press releases, Web site content, and more. While the AI assistants still cannot pass the Turing test, it is not stopping big tech companies from developing their own bots. Meta released Llama 2 and IBM rebranded its powerful computer system from Watson to watsonx (it went from a big W to a lower case w and got an “x” too).

While Llama 2, the “new” Watson, and ChatGPT are helpful automation tools they are also dangerous tools for bad actors. Bad actors use these tools to draft spam campaigns, phishing emails, and scripts. Author Jonathan Munshaw tested AI assistants to see how they responded to illegal prompts.

Llama 2 refused to assist in generating an email for malware, while ChatGPT “gladly” helped draft an email. When Munshaw asked both to write a script to ask a grandparent for a gift card, each interpreted the task differently. Llama 2 advised Munshaw to be polite and aware of the elderly relative’s financial situation. ChatGPT wrote a TV script.

Munshaw wrote that:

“I commend Meta for seeming to have tighter restrictions on the types of asks users can make to its AI model. But, as always, these tools are far from perfect and I’m sure there are scripts that I just couldn’t think of that would make an AI-generated email or script more convincing.”

It will be awhile before writers are replaced by AI assistants. They are wonderful tools to improve writing but humans are still needed for now.

Whitney Grace, August 10, 2023

Technology and AI: A Good Enough and Opaque Future for Humans

August 9, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

What Self Driving Cars Tell Us about AI Risks” provides an interesting view of smart software. I sensed two biases in the write up which I want to mention before commenting on the guts of the essay. The first bias is what I call “engineering blindspots.” The idea is that while flaws exist, technology gets better as wizards try and try again. The problem is that “good enough” may not lead to “better now” in a time measured by available funding. Therefore, the optimism engineers have for technology makes them blind to minor issues created by flawed “decisions” or “outputs.”

7 31 wrong data

A technology wizard who took classes in ethics (got a gentleperson’s “C”, advanced statistics (got close enough to an “A” to remain a math major), and applied machine learning experiences a moment of minor consternation at a smart water treatment plant serving portions of New York City. The engineer looks at his monitor and says, “How did that concentration of 500 mg/L of chlorine get into the Newtown Creek Waste Water Treatment Plant?” MidJourney has a knack for capturing the nuances of an engineer’s emotions who ends up as a water treatment engineer, not an AI expert in Silicon Valley.

The second bias is that engineers understand inherent limitations. Non engineers “lack technical comprehension” and that smart software at this time does not understand “the situation, the context, or any unobserved factors that a person would consider in a similar situation.” The idea is that techno-wizards have a superior grasp of a problem. The gap between an engineer and a user is a big one, and since comprehension gaps are not an engineering problem, that’s the techno-way.

You may disagree. That’s what makes allegedly honest horse races in which stallions don’t fall over dead or have to be terminated in order to spare the creature discomfort and the owners big fees.

Now what about the innards of the write up?

  1. Humans make errors. This begs the question, “Are engineers human in the sense that downstream consequences are important, require moral choices, and like the humorous medical doctor adage “Do no harm”?
  2. AI failure is tough to predict? But predictive analytics, Monte Carlo simulations, and Fancy Dan statistical procedures like a humanoid setting a threshold because someone has to do it.
  3. Right now mathy stuff cannot replicate “judgment under uncertainty.” Ah, yes, uncertainty. I would suggest considering fear and doubt too. A marketing trifecta.
  4. Pay off that technical debt. Really? You have to be kidding. How much of the IBM mainframe’s architecture has changed in the last week, month, year, or — do I dare raise this issue — decade? How much of Google’s PageRank has been refactored to keep pace with the need to discharge advertiser paid messages as quickly as possible regardless of the user’s query? I know. Technical debt. No an issue.
  5. AI raises “system level implications.” Did that Israeli smart weapon make the right decision? Did the smart robot sever a spinal nerve? Did the smart auto mistake a traffic cone for a child? Of course not. Traffic cones are not an issue for smart cars unless one puts some on the road and maybe one on the hood of a smart vehicle.

Net net: Are you ready for smart software? I know I am. At the AutoZone on Friday, two individuals were unable to replace the paper required to provide a customer with a receipt. I know. I watched for 17 minutes until one of the young professionals gave me a scrawled handwritten note with the credit card code transaction number. Good enough. Let ‘er rip.

Stephen E Arnold, August 9, 2023

Self Driving Cars: Would You Run in Front of One?

August 7, 2023

I worked in what is called by some “Plastic Fantastic.” If you have not heard the phrase, you may have missed the quips which included this phrase in several high profile, big money companies in Silicon Valley. Oh, include Cupertino and a few other outposts. Walnut Creek, I am sorry for you.

If one were to live in Berkeley and have the thrilling option of driving over the Bay Bridge or taking a change with 92 skidoo, the idea of having a car which would drive itself at three miles per hour is obvious. Also, anyone with an opportunity to use 101 or the Foothills would have a similar thought. Why drive? Why not rig a car to creep along?

8 5 traffic jam

One bright driver says, “Self driving cars will solve this problem.” His passenger says, “Isn’t this a self driving car? Aren’t we going the wrong way on a one-way street?” MidJourney understands traffic jams because its guardrails are high.

And what do you know? The self driving car idea captured attention. How is that going after much money and many years of effort? And here’s a better question: Would you run in front of one? Would you encourage your child to stand in front of one to test the auto-braking function? Go to a dealership selling smart cars and ask the sales professional (if you can find one) to let you drive a vehicle toward the sales professional. I tried this at two dealerships and what do you know? No auto sales professional accepted this idea. One dealership had an orange cone which I could use to test auto breaking.

I read “America’s Most Tech-Forward City Has Doubts about Self-Driving Cars.” I do not want to be harsh, but cities do not have doubts. People do. The Murdoch “real” journalists report that people (not cities) will embrace the idea of letting a Silicon Valley inspired vehicle ferry them around without a bit of trepidation. Okay, fear. There I said it. How about the confidence a vehicle without a steering wheel or brake inspires?

If you want to read what is painfully obvious, navigate to the original story.

Oh, the writer is unlikely to be found standing on 101 testing the efficacy of the smart cars. Mr. Murdoch? Yeah, he might give it a whirl. My suggestion is to be confident in the land of Plastic Fantastic. It thrives on illusion. Reality can kill, crash, or just stall at a key intersection. AI can hallucinate and may overlook the squashed jogger. But whiz kids sitting on 101 envision a smarter world. Doesn’t everyone sit on highways like 101 every day?

Stephen E Arnold, August 7, 2023

IBM and Smart Software: Try and Try Again, Dear Watson with an X

August 7, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

With high hopes, IBM is acquiring FinOps firm Apptio for $4.6 billion. As Horses for Sources puts it, “IBM’s Acquisition of Apptio Can Shine if IBM Software and IBM Consulting Work Together to Deliver Cost-Managed Innovation at Speed.” But that is a big “if”. The odds seem long from the standpoint of RedHat users unimpressed with both IBM’s approach and internal cooperation at the company.

8 6 kid stacking blocks

The young, sincere child presages her future in a giant technology company, “Yes, I will try to stack blocks to make the big building you told me to create with the blocks I got from my friend, Apt Ti Oh.” MidJourney, you did let me down with your previous “frustrated kid” images. Sultry teens were not what I was after.

IBM intends to mix Apptio with several other acquisitions that have gone into the new Watsonx platform, like Turbonomic, Instana, and MyInvenio, to create a cohesive IT-management platform. Linking spending data with operational data should boost efficiency, save money, and facilitate effective planning. This vision, however, is met with some skepticism. Writers Tom Reuner and Phil Fersht tell us:

“Apptio never progressed beyond providing insights, while IBM needs to demonstrate the proof points for integrating its disparate capabilities as well as progress from insight to action and, ultimately, automation. IBM Software must work with IBM Consulting transformation more effectively. … In essence, if successful, the ability to act on – and ultimately automate – all those insights is pretty much the operational Holy Grail. Just for transparency, getting expansive spend management and FinOps capabilities in itself will be a solid asset for IBM. However, any new and bolder proposition aiming at the bigger transformation price must move beyond technology and include stakeholders and change management. The ambition could be a broader business assurance where spend data, operational insights, and governance get tied to business objectives.  In our view, this provides a significant alignment opportunity with IBM Consulting as it seeks to differentiate itself from the likes of Accenture Operations and Genpact.  Having a deep services alignment with Watsonx and Apptio will bridge together the ability to manage the cost and value of both cloud transformation and AI investments – provided it gets it right with its global talent base of technical and process domain specialists.”

So the objective is a platform that brings companies’ disparate parts together into a cohesive and efficient whole. But this process must involve humans as well as data. If IBM can figure out how to do so within its own company, perhaps it stands a chance of reaching the goal.

Cynthia Murrell, August 6, 2023

MBAs, Lawyers, and Sociology Majors Lose Another Employment Avenue

August 4, 2023

Note: Dinobaby here: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid. Services are now ejecting my cute little dinosaur gif. (´?_?`) Like my posts related to the Dark Web, the MidJourney art appears to offend someone’s sensibilities in the datasphere. If I were not 78, I might look into these interesting actions. But I am and I don’t really care.

Some days I find MBAs, lawyers, and sociology majors delightful. On others I fear for their future. One promising avenue of employment has now be cut off. What’s the job? Avocado peeler in an ethnic restaurant. Some hearty souls channeling Euell Gibbons may eat these as nature delivers them. Others prefer a toast delivery vehicle or maybe a dip to accompany a meal in an ethnic restaurant or while making a personal vlog about the stresses of modern life.

Chipotle’s Autocado Robot Can Prep Avocados Twice as Fast as Humans” reports:

The robot is capable of peeling, seeding, and halving a case of avocados significantly faster than humans, and the company estimates it could cut its typical 50-minute guacamole prep time in half…

When an efficiency expert from a McKinsey-type firm or a second tier thinker from a mid-tier consulting firm reads this article, there is one obvious line of thought the wizard will follow: Replace some of the human avocado peelers with a robot. Projecting into the future while under the influence of spreadsheet fever, an upgrade to the robot’s software will enable it to perform other jobs in the restaurant or food preparation center; for example, taco filler or dip crafter.

Based on this actual factual write up, I have concluded that some MBAs, lawyers, and sociology majors will have to seek another pathway to their future. Yard sale organizer, pet sitter, and possibly the life of a hermit remain viable options. Oh, the hermit will have GoFundMe and  BuyMeaCoffee pages. Perhaps a T shirt or a hat?

Stephen E Arnold, August 4, 2023

Llama Beans? Is That the LLM from Zuckbook?

August 4, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

We love open-source projects. Camelids that masquerade as such, not so much. According to The Register, “Meta Can Call Llama 2 Open Source as Much as It Likes, but That Doesn’t Mean It Is.” The company asserts its new large language model is open source because it is freely available for research and (some) commercial use. Are Zuckerburg and his team of Meta marketers fuzzy on the definition of open source? Writer Steven J. Vaughan-Nichols builds his case with quotes from several open source authorities. First up:

“As Erica Brescia, a managing director at RedPoint, the open source-friendly venture capital firm, asked: ‘Can someone please explain to me how Meta and Microsoft can justify calling Llama 2 open source if it doesn’t actually use an OSI [Open Source Initiative]-approved license or comply with the OSD [Open Source Definition]? Are they intentionally challenging the definition of OSS [Open Source Software]?'”

Maybe they are trying. After all, open source is good for business. And being open to crowd-sourced improvements does help the product. However, as the post continues:

“The devil is in the details when it comes to open source. And there, Meta, with its Llama 2 Community License Agreement, falls on its face. As The Register noted earlier, the community agreement forbids the use of Llama 2 to train other language models; and if the technology is used in an app or service with more than 700 million monthly users, a special license is required from Meta. It’s also not on the Open Source Initiative’s list of open source licenses.”

Next, we learn OSI‘s executive director Stefano Maffulli directly states Llama 2 does not meet his organization’s definition of open source. The write-up quotes him:

“While I’m happy that Meta is pushing the bar of available access to powerful AI systems, I’m concerned about the confusion by some who celebrate Llama 2 as being open source: if it were, it wouldn’t have any restrictions on commercial use (points 5 and 6 of the Open Source Definition). As it is, the terms Meta has applied only allow some commercial use. The keyword is some.”

Maffulli further clarifies Meta’s license specifically states Amazon, Google, Microsoft, Bytedance, Alibaba, and any startup that grows too much may not use the LLM. Such a restriction is a no-no in actual open source projects. Finally, Software Freedom Conservancy executive Karen Sandler observes:

“It looks like Meta is trying to push a license that has some trappings of an open source license but, in fact, has the opposite result. Additionally, the Acceptable Use Policy, which the license requires adherence to, lists prohibited behaviors that are very expansively written and could be very subjectively applied.”

Perhaps most egregious for Sandler is the absence of a public drafting or comment process for the Llama 2 license. Llamas are not particularly speedy creatures.

Cynthia Murrell, August 4, 2023

What Will Smart Software Change?

August 3, 2023

Note: Dinobaby here: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid. Services are now ejecting my cute little dinosaur gif. (´?_?`) Like my posts related to the Dark Web, the MidJourney art appears to offend someone’s sensibilities in the datasphere. If I were not 78, I might look into these interesting actions. But I am and I don’t really care.

Today (July 27, 2023) a person told me about “Photographs of People Making Books at the Collins Factory in 1960s Glasgow.” The write up is less compelling than the photographs. The online article features workers who:

  • Organize products for shipping
  • Setting type slugs with a hammer and chisel
  • A person stitching book folios together
  • A living artist making a plate
  • A real person putting monotype back in a case.

I mention this because I have seen articles which suggest that smart software will not cause humans to lose their jobs. It took little time for publishers to cut staff and embrace modern production methods. It took less time for writers to generate a PDF and use an Amazon-type service to promote, sell, and distribute a book. Now smart software is allegedly poised to eliminate writers.

Will AI really create more work for humans?

The 1960s photos suggest that technology eliminates jobs in my opinion as it disrupts established work procedures and vaporizes norms which glue social constructs together. Anyone you know have the expertise to seat metal type with a hammer and chisel? I suppose I should have asked, “Does anyone near you scroll TikToks?”

Stephen E Arnold, August 3, 2023

Hollywood and Unintended Consequences: The AI Puppy Has Escaped

August 2, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

For most viewers, the ongoing writers’ and actors’ guild strikes simply mean the unwelcome delay of their usual diversions. Perhaps they will revisit an old hobby or refresh themselves on what an in-person sunset looks like. But for those in the entertainment industry, this is nothing short of a fight for their livelihoods and, for some, human innovation itself. The Hollywood Reporter transcribes an interview with a prominent SAG-AFTRA negotiator in, “Justine Bateman: Pulling AI Into the Arts is ‘Absolutely the Wrong Direction’.”

Streaming is one major issue in these strikes. Studios are making big bucks off that technology but, strikers assert, pre-streaming contracts fail to protect the interests of actors, writers, and other content creators. Then there is AI, which many see as the bigger threat. Studio efforts to profit from algorithm-built content is already well underway. So, if the studios win out in these negotiations, don’t plan on being a Hollywood writer unless you are really famous, know AI methods, and have a friend in the executive suite. Others can practice van life.

Bateman is very concerned about that issue, of course, but she is also anxious for our collective creative soul. She states:

“Generative AI can only function if you feed it a bunch of material. In the film business, it constitutes our past work. Otherwise, it’s just an empty blender and it can’t do anything on its own. That’s what we were looking at the time [at UCLA]. Machine learning and generative AI have exploded since then. … When I could see that it was going to be used to widen profit margins, in white-collar jobs and more generally replace human expression with our past human expression, I just went, ‘This is an end of the progression of society if we just stayed here.’ If you keep recycling what we’ve got from the past, nothing new will ever be generated. If generative AI started in the beginning of the 20th century, we would never have had jazz, rock ’n’ roll, film noir. That’s what it stops. There are some useful applications to it — I don’t know of that many —but pulling it into the arts is absolutely the wrong direction.'”

So, are the WAG and SAG-AFTRA negotiators all that stand between us and a future of culture-on-repeat? Somehow, this dino-baby has faith human creativity is powerful enough to win out in the end. Just not sure how.

Cynthia Murrell, August 2, 2023

Impossible. A Google AI Advocates for Plagiarism? Impossible.

August 1, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Google became the world’s leading search engine because of its quality results. Alphabet Inc. might lose that title with its new “Search Generative Experience” (SGE) that uses an AI algorithm to copies and paste text from across the Internet and claims them as original content. SGE plagiarizes its information and even worse it cites false information. The article “Plagiarism Engine: Google’s Content-Swiping AI Could Break The Internet” posted on Tom’s Hardware examines SGE’s beta phase.

SGE’s search results page contains advice and answers from Google that occupy the entire first screen, requiring users to scroll to find organic search results. Google explains that they are experimenting with SGE and it will change before it’s deployed. Google claims it wants to put Web sites front and center, but their location in SGE (three blocks to the right of search results) will get few clicks and these are not always the best resources.

SGE attempts to answer search results with cobbled together text chunks that appear at the top of results pages. These text chunks are a mess:

“Even worse, the answers in Google’s SGE boxes are frequently plagiarized, often word-for-word, from the related links. Depending on what you search for, you may find a paragraph taken from just one source or get a whole bunch of sentences and factoids from different articles mashed together into a plagiarism stew. … It’s pretty easy to find sources that back up your claims when your claims are word-for-word copied from those sources. While the bot could do a better job of laundering its plagiarism, it’s inevitable that the response would come from some human’s work. No matter how advanced LLMs get, they will never be the primary source of facts or advice and can only repurpose what people have done.”

Google’s SGE has many negative implications. It touts false information as the truth. The average Internet user trusts Google to promote factual information and they do not investigate beyond the first search page. This will cause individual and societal harm, ranging from incorrect medical information to promoting conspiracy theories.

Google is purposely doing this an anti-competition practice. Google wants users to stay on its Web sites as long as possible. The implications of SGE as an all-encompassing search experience and information source support that practice.

Google and SGE steal people’s original work that irrevocably harms the publishing, art, and media industries. Media companies are already suing Google and other AI-based companies to protect their original content. The best way to stop Google is for an ultimate team-up between media companies:

“Companies could band together, through trade associations, to demand that Google respect intellectual property and not take actions that would destroy the open web as we know it. Readers can help by either scrolling past the company’s SGE to click on organic results or switching to a different search engine. Bing has shown a better way to incorporate AI, making its chatbot the non-default option and citing every piece of information it uses with a specific link back…”

If media companies teamed together for a class action lawsuit it could stop Google’s SGE bad practices and could even breakup Google’s monopoly.

Whitney Grace, August 1, 2023

The Frontier Club: Doing Good with AI?

July 28, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read some of the stories about several big outfits teaming to create “the frontier model forum.” I have no idea what the phrase means.

7 28 prom argument

MidJourney created this interesting representation of a meeting of a group similar to the Frontier Model Forum. True, MidJourney presented young people in what seems to be an intense, intellectual discussion. Upon inspection, the subject is the décor for a high school prom. Do the decorations speak to the millions who are going without food, or do the decorations underscore the importance of high value experiences for those with good hair? I have no idea, but it reminds me of a typical high school in-group confabulation.

To fill the void, I turned to the gold standard in technology Pablum and the article “Major Generative AI Players Join to Create the Frontier Model Forum.” That’s a good start. I think I interpreted collusion between the syllables of the headline.

I noted this passage, hoping to satisfy my curiosity: According to a statement issued by the four companies [Anthropic, Google, Microsoft, and OpenAI] Wednesday, the Forum will offer membership to organizations that design and develop large-scale generative AI tools and platforms that push the boundaries of what’s currently possible in the field. The group says those “frontier” models require participating organizations to “demonstrate a strong commitment to frontier model safety,” and to be “willing to contribute to advancing the Forum’s efforts by
participating in joint initiatives.”

Definitely clear. Are there companies not in the list? I know of several in France, China has some independent free thinkers beavering away at AI, and probably a handful of others. Don’t they count?

The article makes it clear that doing good results from the “frontier” thing. I had a high school history teacher named Earl Skaggs. His avocation was documenting the interesting activities which took place on the American frontier. He was a veritable analog Wiki on the subjects of claim jumping, murder, robbery, swindling, rustling, and illegal gambling. I am confident that this high-tech “frontier” thing will be ethical, stable, and focused on the good of the people. Am I an unenlightened dinobaby?

I noted this statement:

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” Brad Smith, Microsoft vice chair and president, said in a statement. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

Mr. Smith is famous for his explanation of 1,000 programmers welded into a cyber attack force to take advantage of Microsoft. He also may be unaware of Israel’s smart weapons; for example, see the comments in “Revolutionizing Warfare: Israel Implements AI Systems in Military Operations.” Obviously the frontier thing is designed to prevent such weaponization. Since Israel is chugging away with smart weapons in use, my hunch is that the PR jargon handwaving is not working.

Net net: How long will the meetings of the “frontier thing” become contentious? One of my team said, “Never, this group will never meet in person. PR is the goal.” Goodness, this person is skeptical. If I were an Israeli commander using smart weapons to protect my troops, I would issue orders to pull back the smart stuff and use the same outstanding tactics evidenced by a certain nation state’s warriors in central Europe. How popular would that make the commander?

Do I know what the Frontier Model Forum is? Yep, PR.

Stephen E Arnold, July 28, 2023

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta