Refining Open: The AI Weak Spot during a Gold Rush

July 13, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Nope, no reference will I make to selling picks and denim pants to those involved in a gold rush. I do want to highlight the essay “AI Weights Are Not Open Source.” There is a nifty chart with rows and columns setting forth some conceptual facets of smart software. Please navigate to the cited document so you can read the text in the rows and columns.

For me, the most important sentence in the essay is this one:

Many AI weights with the label “open” are not open source.

How are these “weights” determined or contrived? Are they derived by proprietary systems and methods? Are they assigned by a subject matter expert, estimated by a software engineer using guess-timation, or cranked out by low-wage workers pressed into the task?

The answers to these questions reveal how models are configured to generate “good enough” results. Present models are prone to providing incomplete, incorrect, or pastiche information.
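To make the essay’s distinction concrete, here is a toy sketch of my own (not from the essay); every number and file name in it is invented:

```python
import json

# Toy illustration: released "weights" are just numbers, the end product
# of training. Nothing in them reveals the training code, the data, or
# the labeling and weighting decisions that produced them.
weights = {
    "layer1": [[0.12, -0.87], [0.44, 0.03]],  # invented values
    "bias": [0.50, -0.10],
}

with open("model_weights.json", "w") as f:
    json.dump(weights, f)

# Publishing model_weights.json is closer to shipping a compiled binary
# than to releasing source: inspectable and runnable, but the recipe
# (systems, methods, and human judgment calls) stays proprietary.
```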

Furthermore, the popularity of obtaining images of Mr. Trump in an orange jumpsuit illustrates how “censorship” is applied to certain requests for information. Try it yourself. Navigate to MidJourney. Jump through the Discord hoops. Input the command “President Donald Trump in an orange jumpsuit.” Get the improper request flag. Then ask yourself, “How does BoingBoing keep creating Mr. Trump in an orange jumpsuit?”

Net net: The power of AI rests with the weights and the controls which allow certain information and disallow other types. “Open” does not mean open like “the door is open.” For AI, “open” is a means to obtain power and exert control, in my opinion.

Stephen E Arnold, July 13, 2023

Business Precepts for Silicon Valley: Shouting at the Grand Canyon?

July 13, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I love people with enthusiasm and passion. What’s important about these two qualities is that they often act like little dumpsters at the Grand Canyon. People put a range of discarded items into them, and hard-working contractors remove the contents and dump them in a landfill. (I hear some countries no longer accept trash from the US. Wow. Imagine that.)

Many years ago, the late industrial photographer John C Evans and I visited the Grand Canyon. We were touring uranium mines and snapping pictures for a client. I don’t snap anything; I used to get paid to be in charge of said image making. I know. Quite a responsibility. I did know enough not to visit the uranium mine face. The photographer? Yeah, well, I did not provide too much information about dust, radiation, and the efficacy of breathing devices in 1973. Senior manager types are often prone to forgetting some details.

Back to the Grand Canyon.

There was an older person who was screaming into or at the Grand Canyon. Most visitors avoided the individual. I, however, walked over and listened to him. He was explaining that everyone had to embrace the sacred nature of the Grand Canyon and stop robbing the souls of the deceased by taking pictures. He provided other outputs about the evils of modern society, the cost of mule renting, and the prices in the “official” restaurants. Since I had no camera, he ignored me. He did yell at John C Evans, who smiled and snapped pictures.

I asked MidJourney to replicate this individual who thought the Grand Canyon, assorted unseen spirits, and the visitors were listening. Here’s what the estimable art system output:

[MidJourney illustration: the Grand Canyon screamer]

I thought of this individual when I read “Seven Rules For Internet CEOs To Avoid Enshittification.” The write up, inspired by a real journalist, surfs on the professional terminology for ruining a free service. I find the term somewhat offensive, and I am amused at the broad use the neologism has found.

The article provides what I think are precepts similar to those outlined in a revered religious book or a collection of Ogden Nash statements. Let me point out that these statements contain elements of truth and would probably reduce philosophers like A.E. Taylor and William James to tears of joy because of their fundamental radical empiricism. Yeah. (These fellows would have told the photographer about the visit to the uranium mine face too.)

The write up lays out a Code of Conduct for some Silicon Valley-type companies. Let me present three of the seven statements and urge you to visit the original article to internalize the precepts as a whole. You may want to consider screaming these out loud in a small group of friends or possibly visiting a local park and shouting at the pedestal where a Civil War statue once stood ignored.

Selected precept one:

Tell your investors that you’re in this for the long haul and they need to be too.

Selected precept two:

Find ways to make money that don’t undermine the community or the experience.

Selected precept three and remember there are four more in the original write up:

Never charge for what was once free.

I selected three of these utterances because each touches upon money. Investors provide money to get more money in return. Power and fame are okay, but money is the core objective. Telling investors to wait or be patient is like telling a TikTok influencer to wait, stand in line like everyone else, or calm down. Tip: That does not work. Investors want money and in a snappy manner. Goals and timelines are part of the cost of taking their money. The Golden Rule: Those with the gold rule.

The idea of giving up money for community or the undefined experience is okay as long as it does not break the Golden Rule. If it does, those providing the funding will get someone who follows the Golden Rule. The mandate to never charge for what was once free is like a one-liner at a Comedy Club. Quite a laugh because money trumps customers and the fuzzy wuzzy notion of experience.

What’s my take on these and the full listing of precepts? Think about the notion of a senior manager retaining information for self-preservation. Think about the crazy person shouting rules into the Grand Canyon. Now ask how quickly certain Silicon Valley-type outfits will operate in a different way. Free insight: The Grand Canyon does not listen. The trash is removed by contractors. The old person shouting eventually gets tired, goes to the elder care facility or back to van life, and Silicon Valley steps boldly toward enshittification. That’s the term, right?

Stephen E Arnold, July 13, 2023

Understanding Reality: A Job for MuskAI

July 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Elon Musk Launches His Own xAI Biz to Understand Reality.” Upon reading this article, I was immediately perturbed. The name of the company should be MuskAI (pronounced mus-key, like the lovable muskox, Ovibos moschatus). This imposing and aromatic animal can tip the scales at up to 900 pounds. Take that to the cage match and watch the opposition wilt or at least scrunch up its nose.

I also wanted to interpret the xAI as AIX. IBM, discharger of dinobabies, could find that amusing. (What happens when AIX memory is corrupted? Answer: Aches in the posterior. Snort snort.)

Finally, my thoughts coalesced around the name Elon-AI, illustrated below by the affable MidJourney:

[MidJourney illustration: Elon-AI]

Bummer. Elon AI is the name of a “coin.” And the proper name Elonai means “a person who has the potential to attain spiritual enlightenment.” A natural!

The article reports:

Elon Musk is founding his own AI company with some lofty ambitions. According to the billionaire, his xAI venture is being formed “to understand reality.” Those hoping to get a better explanation than Musk’s brief tweet by visiting xAI’s website won’t find much to help them understand what the company actually plans to do there, either.  “The goal of xAI is to understand the true nature of the universe,” xAI said of itself…

I have a number of questions. Let me ask one:

Will Elon AI go after the Zuck AI?

And another:

Will the two AIs power an unmanned fighter jet, each loaded with live ordnance?

And the must-ask:

Will the AIs attempt to kill one another?

The mano-a-mano fight in Las Vegas (maybe in that weird sphere appliquéd with itsy bitsy LEDs) is less interesting to me than watching two warbirds from the Dayton Air Museum gear up and dogfight.

Imagine a YouTube video, then some TikToks, and finally a Netflix original released to the few remaining old-fashioned theaters.

That’s entertainment. Sigh. I mean xAI.

Stephen E Arnold, July 12, 2023

TikTok Interface: Ignoring the Big Questions

July 10, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “TikTok Is Confusing by Design.” That’s correct. But the write up does not focus on the big questions. It tiptoes up to the $64 question and then goes for a mocha latte. Very modern.


A number of articles ignore flashing red lights. William James called this “a certain blindness.” Thanks, MidJourney, for a wonderful illustration crafted from who knows what.

Note these snippets from the essay:

  • a controlled experience that’s optimized to know or decide what we want and then deliver it to us.
  • You don’t get to choose from a list of related content, nor is there any real order to whatever you’ll get.
  • It’s a comfortable space to be in when you don’t have to make choices.
  • TikTok’s approach has become the new standard. Part of that standard is aggressively pushing content at you that the app has decided you want to see.

So what are the big questions? The article shoves them to the end of the essay. Will people persist and ponder them? Don’t big questions warrant a more compelling presentation?

Here’s a big question:

“Who gets to control what you are seeing of reality?”

The answer is obvious in the case of TikTok: Entities in some way linked to the Chinese government.

And what about online services working overtime to duplicate the TikTok model? Who is in control of the content, its context, and its concepts?

The answer is, “An outfit that will have an unprecedented amount of influence over users’ thoughts and actions.” If those users — digital addicts, perhaps — are not able to recognize manipulation or simply choose to say, “Hey, no big deal,” TikTok-type content systems will be driving folks down the Information Highway. Riders may have no choice. Riders may have to pay to be driven around. Riders may not be in control of their behaviors, ideas, and time.

I like the idea of TikTok as an interface. I don’t like touching on big questions and then sidestepping them.

Net net: I won’t pay for access to Vox.

Stephen E Arnold, July 10, 2023

Google and AMP: Good Enough

July 10, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Due to the rise of mobile devices circa the 2010s, the Internet was slammed with slow-loading Web sites. In 2015, Google told publishers it had a solution dubbed “Accelerated Mobile Pages” (AMP). Everyone bought into AMP, but it soon proved to be more like a “speed trap,” says The Verge.

AMP worked well at first, but it was hard to use advertising tools that were not from Google. Google’s plan to make the Internet great again backfired. Seventeen state attorneys general filed a lawsuit against Google in 2020 with AMP as a key topic. The lawsuit alleges Google purposefully designed AMP to prevent publishers from using alternative ad tools. The US Justice Department filed an antitrust lawsuit in January 2023, claiming Google is attempting to control more of the Internet.


A creature named Googzilla chats with a well-known publisher about a business relationship. Googzilla is definitely impressed with the publisher’s assertion that quality news can generate traffic and revenue without a certain Web search company’s help. Does the publisher trust Googzilla? Sure, the publisher says, “We just have lunch and chat. No problem.” 

Google promised that AMP would drive more traffic to publishers’ Web sites and it would fix the loading speed lag. Google was the only big tech company that offered a viable solution to the growing demand mobile devices created, so everyone was forced to adopt AMP. Google did not care as long as it was the only player in the game:

“As long as anyone played the game, everybody had to. ‘Google’s strategy is always to create prisoner’s dilemmas that it controls — to create a system such that if only one person defects, then they win,’ a former media executive says. As long as anyone was willing to use AMP and get into that carousel, everyone else had to do the same or risk being left out.”
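The prisoner’s dilemma quip can be made concrete with a toy payoff table. A minimal sketch; the traffic-share numbers are invented, and only their ordering matters:

```python
# Each publisher chooses to adopt AMP or hold out. Payoffs are a
# hypothetical share of search traffic for "you" given your rival's choice.
PAYOFF = {
    ("adopt", "adopt"): 50,  # everyone in the AMP carousel: status quo
    ("adopt", "hold"): 80,   # you occupy the carousel; the holdout suffers
    ("hold", "adopt"): 20,   # you are the one left out of the carousel
    ("hold", "hold"): 60,    # nobody adopts; the open web does fine
}

for my_choice in ("adopt", "hold"):
    worst_case = min(PAYOFF[(my_choice, rival)] for rival in ("adopt", "hold"))
    print(f"{my_choice}: worst-case traffic share {worst_case}")

# Adopting dominates (50 > 20 and 80 > 60) even though mutual holdout (60)
# beats mutual adoption (50): exactly the trap the former executive describes.
```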

Google promised AMP would be open source, but Google flip-flopped on that decision whenever it suited the company. Non-Google developers “fixed” AMP by working through its locked-down structure so it could support other tools. Because of their efforts, AMP got better and is now a decent tool. Google, however, trundles along. Perhaps Google is just misunderstood.

Whitney Grace, July 10, 2023

Amazon: Machine-Generated Content Adds to Overhead Costs

July 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Amazon Has a Big Problem As AI-Generated Books Flood Kindle Unlimited” makes it clear that Amazon is going to have to re-think how it runs its self-publishing operation and figure out how to deal with machine-generated books from “respected” publishers.

The author of the article is expressing concern about ChatGPT-type outputs being assembled into electronic books. That concern is focused on Amazon and its ageing, arthritic Kindle eBook business. With text-to-voice tools, I suppose one should think about machine-generated Audible audiobooks too. The culprit, however, may be Amazon itself. Paying a person to read a book for seven hours, not screw up, and keep the sound acceptable when the reader has a stuffed nose can be pricey.


A senior Amazon executive thinks to herself, “How can I fix this fake content stuff? I should really update my LinkedIn profile too.” Will the lucky executive charged with fixing the problem identified in the article be allowed to eliminate revenue? Yep, get going on the LinkedIn profile first. Tackle the fake stuff later.

The write up points out:

the mass uploading of AI-generated books could be used to facilitate click-farming, where ‘bots’ click through a book automatically, generating royalties from Amazon Kindle Unlimited, which pays authors by the amount of pages that are read in an eBook.
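The economics are easy to sketch. Here is a back-of-envelope calculation with every number invented for illustration (Kindle Unlimited’s actual per-page payout varies monthly and has historically been on the order of half a cent):

```python
# Hypothetical click-farm arithmetic for AI-generated books in a
# pay-per-page-read program. All figures below are invented.
PAYOUT_PER_PAGE = 0.0045       # assumed USD per page read
PAGES_PER_BOOK = 3_000         # assumed padded, machine-generated book
BOT_ACCOUNTS = 200             # assumed subscription accounts run by bots
FULL_READS_PER_BOT_PER_DAY = 5 # assumed fake "reads" each bot completes daily

daily_royalties = (PAYOUT_PER_PAGE * PAGES_PER_BOOK
                   * BOT_ACCOUNTS * FULL_READS_PER_BOT_PER_DAY)
print(f"Hypothetical daily royalties: ${daily_royalties:,.2f}")  # $13,500.00
```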

And what’s Amazon doing about this quasi-fake content? The article reports:

It [Amazon] didn’t explicitly state that it was making an effort specifically to address the apparent spam-like persistent uploading of nonsensical and incoherent AI-generated books.

Then, the article raises the issues of “quality” and “authenticity.” I am not sure what these two glory words mean. My impression is that a machine-generated book is not as good as one crafted by a subject matter expert or motivated human author. If I am right, the editors at TechRadar are apparently oblivious to the idea of using XML-structured content and a MarkLogic-type tool to slice-and-dice content. The components are then assembled into a reference book. I want to point out that this method has been in use by professional publishers for a number of years. Because I signed a confidentiality agreement, I am not able to identify this outfit.

But I still recall the buzz of excitement that rippled through one officer meeting at this outfit when those listening to a presentation realized [a] humanoids could be terminated and a reduced staff could produce more books and [b] the guts of the technology was a database, a technology mostly understood by those with a few technical conferences under their belt. Yippy! No one had to learn anything. Just calculate the financial benefit of dumping humans and figure out how to expense the contractors who could format content from a hovel in a Myanmar-type low-cost location. At night, the executives dreamed about their bonuses for hitting financial targets and about RIF’ing editorial staff, subject matter experts, and assorted specialists who doodled with front matter, footnotes, and fonts.
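For the curious, the assembly trick is mundane. Here is a minimal sketch using Python’s standard library in place of a MarkLogic-type XML database; the element names and content are invented:

```python
import xml.etree.ElementTree as ET

# A component store: tagged chunks of content, ready to be sliced and diced.
SOURCE = """
<components>
  <entry topic="uranium"><title>Uranium</title><body>Dense, radioactive metal.</body></entry>
  <entry topic="radon"><title>Radon</title><body>Radioactive noble gas.</body></entry>
  <entry topic="helium"><title>Helium</title><body>Inert lifting gas.</body></entry>
</components>
"""

def assemble_reference_book(xml_text: str, topics: list[str]) -> str:
    """Pull matching entries from the component store and glue them into
    a 'new' reference work; no human editor required."""
    root = ET.fromstring(xml_text)
    sections = []
    for entry in root.iter("entry"):
        if entry.get("topic") in topics:
            title = entry.findtext("title")
            body = entry.findtext("body")
            sections.append(f"{title}\n{body}")
    return "\n\n".join(sections)

print(assemble_reference_book(SOURCE, ["uranium", "radon"]))
```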

Net net: There is no fix. The write up illustrates a lack of understanding about how large sections of the information industry use technology and the established procedures for seizing a cost-saving opportunity. Quality means whatever produces more revenue; authenticity is a marketing job. Amazon has a content problem and has to gear up its tools and business procedures to cope with machine-generated content, whether in product reviews or eBooks.

Stephen E Arnold, July 7, 2023

Pricing Smart Software: Buy Now Because Prices Are Going Up in 18 Hours 46 Minutes and Nine Seconds, Eight Seconds, Seven…

July 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I ignore most of the apps, cloud, and hybrid products and services infused with artificial intelligence. As one wit observed, AI means artificial ignorance. What I find interesting are the pricing models used by some of the firms. I want to direct your attention to Sheeter.ai. The service lets one say in natural language something like “Calculate the median of A:Z rows.” The system then spits out the Excel formula, which can be pasted into a cell. The Sheeter.ai formula works in Google Sheets too because Google wants to watch Microsoft Excel shrivel and die a painful death. The benefits of the approach are similar to services which convert natural language statements into well-formed SQL code (in theory). Will the dynamic duo of Google and Microsoft implement a similar feature in their spreadsheets? Of course, but Sheeter.ai is betting its approach is better.
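Here is a toy sketch of the natural-language-to-formula idea. This is not Sheeter.ai’s actual implementation (the service presumably uses a language model, not regular expressions); it only illustrates the translation step:

```python
import re

# Map a simple English request to a spreadsheet formula. A real service
# would use a language model; this regex version just shows the idea.
PATTERN = re.compile(
    r"(?:calculate|compute|find) the (sum|average|median|max|min) of (\S+)",
    re.IGNORECASE,
)

def to_formula(request: str) -> str:
    match = PATTERN.search(request)
    if not match:
        raise ValueError(f"No formula pattern matched: {request!r}")
    function, cell_range = match.groups()
    return f"={function.upper()}({cell_range})"

print(to_formula("Calculate the median of A1:Z100"))  # -> =MEDIAN(A1:Z100)
```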

The innovation for which Sheeter.ai deserves a pat on the back is its approach to pricing. The screenshot below makes clear that the price one sees on the screen at a particular point in time is going to go up. A countdown timer helps boost user anxiety about price.

[Screenshot: Sheeter.ai pricing page with countdown timer]
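The mechanics behind the anxiety are trivial. A minimal sketch of the countdown-plus-price-bump logic; the prices and deadline are invented and have nothing to do with Sheeter.ai’s actual numbers:

```python
from datetime import datetime, timedelta

# Invented tiers and deadline, for illustration only.
DEADLINE = datetime.now() + timedelta(hours=18, minutes=46, seconds=9)
CURRENT_PRICE, NEXT_PRICE = 29.00, 39.00

def price_banner(now: datetime) -> str:
    """Render the banner a pricing page might show at a given moment."""
    remaining = int((DEADLINE - now).total_seconds())
    if remaining <= 0:
        return f"Too late! The price is now ${NEXT_PRICE:.2f}."
    hours, rest = divmod(remaining, 3600)
    minutes, seconds = divmod(rest, 60)
    return (f"Buy now for ${CURRENT_PRICE:.2f}! "
            f"Price rises in {hours}h {minutes}m {seconds}s.")

print(price_banner(datetime.now()))
```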

I was disappointed that the graphics did not include a variant of James Bond (the super spy) chained to an explosive device. Bond, James Bond, was using his brain to deactivate the timer. Obviously he was successful because there has been a half century of Bond, James Bond, films. He survives every time.

Will other AI-infused products and services implement anxiety patterns to induce people to provide their name, email, and credit card? It seems in line with the direction in which online and AI businesses are moving. Right, Mr. Bond. Nine, eight, seven….

Stephen E Arnold, July 7, 2023

Step 1: Test AI Writing Stuff. Step 2: Terminate Humanoids. Will Outrage Prevent the Inevitable?

July 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am fascinated by the information (allegedly actual factual) in “Gizmodo and Kotaku Staff Furious After Owner Announces Move to AI Content.” Part of my interest is the subtitle:

God, this is gonna be such a f***ing nightmare.

Ah, for whom, pray tell. Probably not for the owners, who may see a pot of gold at the end of the smart software rainbow; for example, Costs Minus Humans Minus Health Care Minus HR Minus Miscellaneous Humanoid Costs like latte makers, office space, and salaries / bonuses. What does this arithmetic produce? More money (value) for the lucky most senior managers and selected stakeholders. Humanoids lose; software wins.


A humanoid writer sits at her desk and wonders if the smart software will become a pet rock or a creature let loose to ruin her life by those who want a better payoff.

For the humanoids, it is hasta la vista. Assume the quality is worse? Then the analysis requires quantifying “worse.” Software will be cheaper over a given time interval; expensive humans lose. Quality is like love and ethics: money matters, and quality becomes good enough.

Will fury or outrage or protests make a difference? Nope.

The write up points out:

“AI content will not replace my work — but it will devalue it, place undue burden on editors, destroy the credibility of my outlet, and further frustrate our audience,” Gizmodo journalist Lin Codega tweeted in response to the news. “AI in any form, only undermines our mission, demoralizes our reporters, and degrades our audience’s trust.” “Hey! This sucks!” tweeted Kotaku writer Zack Zwiezen. “Please retweet and yell at G/O Media about this! Thanks.”

Much to the delight of her significant others, the “f***ing nightmare” is from the creative, imaginative humanoid Ashley Feinberg.

An ideal candidate for early replacement by a software system and a list of stop words.

Stephen E Arnold, July 5, 2023

Academics and Ethics: We Can Make It Up, Right?

July 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Bogus academic studies were already a troubling issue. Now generative text and image algorithms are turbocharging the problem. Nature describes how in “AI Intensifies Fight Against ‘Paper Mills’ That Churn Out Fake Research.” Writer Layal Liverpool states:

“Generative AI tools, including chatbots such as ChatGPT and image-generating software, provide new ways of producing paper-mill content, which could prove particularly difficult to detect. These were among the challenges discussed by research-integrity experts at a summit on 24 May, which focused on the paper-mill problem. ‘The capacity of paper mills to generate increasingly plausible raw data is just going to be skyrocketing with AI,’ says Jennifer Byrne, a molecular biologist and publication-integrity researcher at New South Wales Health Pathology and the University of Sydney in Australia. ‘I have seen fake microscopy images that were just generated by AI,’ says Jana Christopher, an image-data-integrity analyst at the publisher FEBS Press in Heidelberg, Germany. But being able to prove beyond suspicion that images are AI-generated remains a challenge, she says. Language-generating AI tools such as ChatGPT pose a similar problem. ‘As soon as you have something that can show that something’s generated by ChatGPT, there’ll be some other tool to scramble that,’ says Christopher.”

Researchers and integrity analysts at the summit brainstormed ideas to combat the growing problem and plan to publish an action plan “soon.” On a related note, attendees agreed AI can be a legitimate writing aid but floated certain requirements, like watermarking AI-generated text and providing access to raw data.


Post-docs and graduate students make up data. MidJourney captures the camaraderie of 21st-century whiz kids rather well. A shared experience is meaningful.

Naturally, such decrees would take time to implement. Meanwhile, readers of academic journals should up their levels of skepticism considerably.

But tenure and grant money are more important than — what’s that concept? — ethical behavior for some.

Cynthia Murrell, July 4, 2023

NSO Group Restructuring Keeps Pegasus Aloft

July 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The NSO Group has been under fire from critics for the continuing deployment of its infamous Pegasus spyware. The company, however, might more resemble a different mythological creature, the phoenix: since its creditors pulled their support, NSO appears to be rising from the ashes.

7 2 pegasus aloft

Pegasus continues to fly. Can it monitor some of the people who have mobile phones? Not in ancient Greece. Other places? I don’t know. MidJourney’s creative powers do not shed light on this question.

The Register reports, “Pegasus-Pusher NSO Gets New Owner Keen on the Commercial Spyware Biz.” Reporter Jessica Lyons Hardcastle writes:

“Spyware maker NSO Group has a new ringleader, as the notorious biz seeks to revamp its image amid new reports that the company’s Pegasus malware is targeting yet more human rights advocates and journalists. Once installed on a victim’s device, Pegasus can, among other things, secretly snoop on that person’s calls, messages, and other activities, and access their phone’s camera without permission. This has led to government sanctions against NSO and a massive lawsuit from Meta, which the Supreme Court allowed to proceed in January. The Israeli company’s creditors, Credit Suisse and Senate Investment Group, foreclosed on NSO earlier this year, according to the Wall Street Journal, which broke that story the other day. Essentially, we’re told, NSO’s lenders forced the biz into a restructure and change of ownership after it ran into various government ban lists and ensuing financial difficulties. The new owner is a Luxembourg-based holding firm called Dufresne Holdings controlled by NSO co-founder Omri Lavie, according to the newspaper report. Corporate filings now list Dufresne Holdings as the sole shareholder of NSO parent company NorthPole.”

President Biden’s executive order notwithstanding, Hardcastle notes governments’ responses to spyware have been tepid at best. For example, she tells us, the EU opened an inquiry after spyware was found on phones associated with politicians, government officials, and civil society groups. The result? The launch of an organization to study the issue. Ah, bureaucracy! Meanwhile, Pegasus continues to soar.

Cynthia Murrell, July 4, 2023
