Digital Delphis: Predictions More Reliable Than Checking Pigeon Innards, We Think

July 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

One of the many talents of today’s AI is apparently a bit of prophecy. Interconnected examines “Computers that Live Two Seconds in the Future.” Blogger Matt Webb pulls together three examples that, to him, represent an evolution in computing.


The AI computer, a digital Delphic oracle, gets some love from its acolytes. One engineer says, “Any idea why the system is hallucinating?” The other engineer replies, “No clue.” MidJourney shows some artistic love to hard-working, trustworthy computer experts.

His first digital soothsayer is Apple’s Vision Pro headset. This device, billed as a “spatial computing platform,” takes melding the real and virtual worlds to the next level. To make interactions as realistic as possible, the headset predicts what a user will do next by reading eye movements and pupil dilation. The Vision Pro even flashes visuals and sounds so fast as to be subliminal and interprets the eyes’ responses. Ingenious, if a tad unsettling.

The next example addresses a very practical problem: WavePredictor from Next Ocean helps with loading and unloading ships by monitoring wave movements and extrapolating the next few minutes. Very helpful for those wishing to avoid cargo sloshing into the sea.

Finally, Webb cites a development that both excites and frightens programmers: GitHub Copilot. Though some coders worry this and similar systems will put them out of a job, others see it more as a way to augment their own brilliance. Webb paints the experience as a thrilling bit of time travel:

“It feels like flying. I skip forwards across real-time when writing with Copilot. Type two lines manually, receive and get the suggestion in spectral text, tab to accept, start typing again… OR: it feels like reaching into the future and choosing what to bring back. It’s perhaps more like the latter description. Because, when you use Copilot, you never simply accept the code it gives you. You write a line or two, then like the Ghost of Christmas Future, Copilot shows you what might happen next – then you respond to that, changing your present action, or grabbing it and editing it. So maybe a better way of conceptualizing the Copilot interface is that I’m simulating possible futures with my prompt then choosing what to actualize. (Which makes me realize that I’d like an interface to show me many possible futures simultaneously – writing code would feel like flying down branching time tunnels.)”
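Webb’s wish for “many possible futures simultaneously” is easy to approximate with today’s completion APIs. Here is a minimal sketch, assuming the OpenAI Python client; the model name and prompt are illustrative, and this is emphatically not how Copilot works under the hood:

```python
# Sketch: ask a code model for several "possible futures" for the same
# partial function, then let a human pick one to actualize. This is NOT
# Copilot's internals; it only illustrates Webb's branching-futures idea.
# Assumes the OpenAI Python client and an illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

partial_code = "def moving_average(values, window):\n    "

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any code-capable model works
    messages=[{"role": "user",
               "content": "Complete this Python function:\n" + partial_code}],
    n=3,            # three alternative futures for the same prompt
    temperature=0.8,
)

# Show the branching futures side by side; the human chooses what to bring back.
for i, choice in enumerate(response.choices, start=1):
    print(f"--- future {i} ---\n{choice.message.content}\n")
```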

Gnarly dude! But what does all this mean for the future of computing? Even Webb is not certain. Considering operating systems that can track a user’s focus, geographic location, communication networks, and augmented reality environments, he writes:

“The future computing OS contains of the model of the future and so all apps will be able to anticipate possible futures and pick over them, faster than real-time, and so… …? What happens when this functionality is baked into the operating system for all apps to take as a fundamental building block? I don’t even know. I can’t quite figure it out.”

Neither do we. Stay tuned, dear readers. Oh, let’s assume the wizards get the digital Delphic oracle outputting the correct future. You know, the future that cares about humanoids.

Cynthia Murrell, July 28, 2023

AI and Malware: An Interesting Speed Dating Opportunity?

July 27, 2023

Note: Dinobaby here: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid. Services are now ejecting my cute little dinosaur gif. Like my posts related to the Dark Web, the MidJourney art appears to offend someone’s sensibilities in the datasphere. If I were not 78, I might look into these interesting actions. But I am, and I don’t really care.

AI and malware. An odd couple? A member of my research team explained at lunch yesterday that an enterprising bad actor could use one of the code-savvy generative AI systems and the information in the list of resources compiled by 0xsyr0 and available on GitHub here. The idea is that one could grab one or more of the malware development resources and do some experimenting with an AI system. He said AmsiHook looked interesting, as did Freeze. Is he correct? Allegedly he will provide an update at our weekly meeting next week. My question is, “Do the recent assertions about smart software cover this variant of speed dating?”

Stephen E Arnold, July 27, 2023

Netflix Has a Job Opening. One Job Opening to Replace Many Humanoids

July 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “As Actors Strike for AI Protections, Netflix Lists $900,000 AI Job.” Obviously the headline is about AI, money, and entertainment. Is the job “real”? Like so much of the output of big companies, it is difficult to determine how much is clickbait, how much is surfing on “real” journalists’ thirst for juicy info, and how much is trolling. Yep, trolling. Netflix drives a story about AI’s coming to Hollywood.

The write up offers Hollywood verbiage and makes an interesting point:

The [Netflix job] listing points to AI’s uses for content creation: “Artificial Intelligence is powering innovation in all areas of the business,” including by helping them to “create great content.” Netflix’s AI product manager posting alludes to a sprawling effort by the business to embrace AI, referring to its “Machine Learning Platform” involving AI specialists “across Netflix.”

The machine learning platform or MLP is an exercise in cost control, profit maximization, and presaging the future. If smart software can generate new versions of old content, whip up acceptable facsimiles, and eliminate insofar as possible the need for non-elite humans, what’s not to like?

The $900,000 may be code for “Smart software can crank out good enough content at lower cost than traditional Hollywood methods.” Even the TikTok and YouTube “stars” face an interesting choice: [a] figure out how to offload work to smart software or [b] learn to cope with burnout, endless squabbles with gatekeepers about money, and the anxiety of becoming a has-been.

Will humans, even talented ones, be able to cope with the pressure smart software will exert on the production of digital content? As it has for the junior attorney and the cannon fodder at blue chip consulting companies, AI is moving from spitting out high school essays to more impactful outputs.

One example is the integration of smart software into workflows. The jargon around this enabling use of smart software is fluid. The $900,000 job focuses on something that those likely to be affected can understand: a good enough script and facsimile actors and actresses with a mouse click.

But the embedded AI promises to rework the back office processes and the unseen functions of humans just doing their jobs. My view is that there will be $900K per year jobs but far fewer of them than there are regular workers. What is the future for those displaced?

Crafting? Running yard sales? Creating fine art?

Stephen E Arnold, July 27, 2023

AI Leaders and the Art of Misdirection

July 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Lately, leaders at tech companies seem to have slipped into a sci-fi movie.


“Trust me. AI is really good. I have been working to create a technology which will help the world. I want to make customers like you, Senator, trust us. I and other AI executives want to save whales. We want the snail darter to thrive. We want the homeless to have suitable housing. AI will deliver this and more, plus power and big bucks for us!” asserts the sincere AI wizard with a PhD and an MBA.

Rein in our algorithmic monster immediately before it takes over the world and destroys us all! But AI Snake Oil asks, “Is Avoiding Extinction from AI Really an Urgent Priority?” Or is it a red herring? Writers Seth Lazar, Jeremy Howard, and Arvind Narayanan consider:

“Start with the focus on risks from AI. This is an ambiguous phrase, but it implies an autonomous rogue agent. What about risks posed by people who negligently, recklessly, or maliciously use AI systems? Whatever harms we are concerned might be possible from a rogue AI will be far more likely at a much earlier stage as a result of a ‘rogue human’ with AI’s assistance. Indeed, focusing on this particular threat might exacerbate the more likely risks. The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that. And in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power. We should be wary of Prometheans who want to both profit from bringing the people fire, and be trusted as the firefighters.”

Excellent point. But what, specifically, are the rich and powerful trying to distract us from here? Existing AI systems are already causing harm, and have been for some time. Without mitigation, this problem will only worsen. There are actions that can be taken, but who can focus on that when our very existence is (supposedly) at stake? Probably not our legislators.

Cynthia Murrell, July 27, 2023

Will Smart Software Take Customer Service Jobs? Do Grocery Stores Raise Prices? Well, Yeah, But

July 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have suggested that smart software will eliminate some jobs. Who will be doing the replacements? Workers one finds on Fiverr.com? Interns who will pay to learn something which may be more useful than a degree in art history? RIF’ed former employees who are desperate for cash and will work for a fraction of their original salary?


“Believe it or not, I am here to help you. However, I strongly suggest you learn more about the technology used to create software robots and helpers like me. I also think you have beautiful eyes. Mine are just blue LEDs, but the Terminator finds them quite attractive,” says the robot who is learning from her human sidekick. Thanks, MidJourney, you have the robot-human art nailed.

The fact is that smart software will perform many tasks once handled by humans. Don’t believe me? Visit a local body shop. Then take a tour of the Toyota factory not too distant from Tokyo’s airport. See the difference? The local body shop is swarming with folks who do stuff with their hands, spray guns, and machines which have been around for decades. The Toyota factory is not like that.

Machines — hardware, software, or combos — do not take breaks. They do not require vacations. They do not complain about hard work and long days. They, in fact, are lousy machines.

Therefore, the New York Times’s article “Training My Replacement: Inside a Call Center Worker’s Battle with AI” provides a human-interest glimpse of the terrors of a humanoid who sees the writing on the wall. My hunch is that the New York Times’s “real news” team will do more stories like this.

However, it would be helpful to readers to include information such as a reference or a subtle nod to analyses like this one: “There Are 4 Reasons Why Jobs Are Disappearing — But AI Isn’t One of Them.” What are these reasons? Here’s a snapshot:

  • Poor economic growth
  • Higher costs
  • Supply chain issues (real, convenient excuse, or imaginary)
  • That old chestnut: Covid. Boo.

Do I buy the report? I think identification of other factors is a useful exercise. In the short term, many organizations are experimenting with smart software. Few are blessed with senior executives who trust technology when those creating the technology are not exactly sure what’s going on with their digital whiz kids.

The Gray Lady’s “real news” teams should be nervous. The wonderful, trusted, reliable Google is allegedly showing how a human can use Google AI to help humans with creating news.

Even art history majors should be suspicious, because once a leader in carpetland hears about the savings generated by deleting humanoids and their costs, those bean counters will allow an MBA to install software. Remember, please, that the mantra of modern management is money and good enough.

Stephen E Arnold, July 26, 2023

Hedge Funds and AI: Lovers at First Sight

July 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

One promise of AI is that it will eliminate tedious tasks (and the jobs that go with them). That promise is beginning to be fulfilled in the investment arena, we learn from the piece “Hedge Funds Are Deploying ChatGPT to Handle All the Grunt Work,” shared by Yahoo Finance. What could go wrong?


Two youthful hedge fund managers are so pleased with their AI-infused hedge fund tactics, they jumped into a swimming pool which is starting to fill with money. Thanks, MidJourney. You have nailed the happy bankers and their enjoyment of money raining down.

Bloomberg’s Justina Lee and Saijel Kishan write:

“AI on Wall Street is a broad church that includes everything from machine-learning algorithms used to compute credit risks to natural language processing tools that scan the news for trading. Generative AI, the latest buzzword exemplified by OpenAI’s chatbot, can follow instructions and create new text, images or other content after being trained on massive amounts of inputs. The idea is that if the machine reads enough finance, it could plausibly price an option, build a portfolio or parse a corporate news headline.”

Parse the headlines for investment direction. Interesting. We also learn:

“Fed researchers found [ChatGPT] beats existing models such as Google’s BERT in classifying sentences in the central bank’s statements as dovish or hawkish. A paper from the University of Chicago showed ChatGPT can distill bloated corporate disclosures into their essence in a way that explains the subsequent stock reaction. Academics have also suggested it can come up with research ideas, design studies and possibly even decide what to invest in.”
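Mechanically, the dovish-or-hawkish test boils down to a one-line classification prompt. A hedged sketch, assuming the OpenAI Python client; the prompt wording and model name are illustrative, not the researchers’ actual setup:

```python
# Sketch of the dovish/hawkish classification idea described above.
# Assumes the OpenAI Python client; the prompt and model are illustrative,
# not the Fed researchers' actual configuration.
from openai import OpenAI

client = OpenAI()

def classify_fed_sentence(sentence: str) -> str:
    """Label a central-bank sentence as 'dovish', 'hawkish', or 'neutral'."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": ("Classify this Federal Reserve statement sentence as "
                        "dovish, hawkish, or neutral. Reply with one word.\n\n"
                        + sentence),
        }],
        temperature=0,  # keep the labels repeatable for a classification task
    )
    return response.choices[0].message.content.strip().lower()

print(classify_fed_sentence(
    "The Committee anticipates that ongoing increases in the target range "
    "will be appropriate."))  # likely 'hawkish'
```

A temperature of zero keeps the labels stable from run to run; whether they are tradable is another question.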

Sounds good in theory, but there is just one small problem (several, really, but let’s focus on just the one): These algorithms make mistakes. Often. (Scroll down in this GitHub list for the ChatGPT examples.) It may be wise to limit one’s investments to firms patient enough to wait for AI to become more reliable.

Cynthia Murrell, July 26, 2023

Google the Great Brings AI to Message Searches

July 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

AI is infiltrating Gmail users’ inboxes. Android Police promises, “Gmail’s New Machine Learning Models Will Improve your Search Results.” Writer Chethan Rao points out this rollout follows June’s announcement of the Help me write feature, which deploys an algorithm to compose one’s emails. He describes the new search tool:

“The most relevant search results are listed under a section called Top results after this update. The rest of them will be listed beneath All results in mail, with these being filtered based on recency, according to the Workspace Blog. Google says this would let people find what they’re looking for ‘with less effort.’ Expanding on the methodology a little bit, the company said (via 9to5Google) its machine learning models will take into account the search term itself, in addition to the most recent emails and ‘other relevant factors’ to pull up the results best suited for the user. The functionality has just begun rolling out this Friday [May 02, 2023], so it could take a couple of weeks before making it to all Workspace or personal Google account holders. Luckily, there are no toggles to enable this feature, meaning it will be automatically enabled when it reaches your device.”

“Other relevant factors.” Very transparent. Kind of them to eliminate the pesky element of choice here. We hope the system works better than Gmail’s recent blue checkmark system (how original), which purported to mark senders one can trust but ended up doing the opposite.
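For readers curious what a “relevance plus recency plus other relevant factors” recipe looks like in the abstract, here is a toy sketch. It has nothing to do with Google’s actual models; every weight and field name is invented:

```python
# Toy illustration of blended mail ranking: crude term-overlap relevance
# plus a recency decay plus an opaque "other factors" term. Invented for
# illustration; not Google's models or weights.
import math
import time

def score(message: dict, query: str, now: float | None = None) -> float:
    now = now or time.time()
    terms = set(query.lower().split())
    words = set(message["text"].lower().split())
    relevance = len(terms & words) / max(len(terms), 1)  # crude term overlap
    age_days = (now - message["timestamp"]) / 86_400
    recency = math.exp(-age_days / 30)  # newer mail decays less
    other_relevant_factors = 0.0        # the opaque part of the recipe
    return 0.7 * relevance + 0.3 * recency + other_relevant_factors

inbox = [
    {"text": "Invoice attached for July", "timestamp": time.time() - 2 * 86_400},
    {"text": "Old invoice from last year", "timestamp": time.time() - 400 * 86_400},
]
top_results = sorted(inbox, key=lambda m: score(m, "invoice"), reverse=True)
print(top_results[0]["text"])  # the recent invoice wins on recency
```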

Buckle up. AI will be helping you in every Googley way.

Cynthia Murrell, July 25, 2023

AI Commitments: But What about Chipmunks and the Bunny Rabbits?

July 23, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI sent executives to a meeting held in “the White House” to agree on some ground rules for “artificial intelligence.” AI is available from a number of companies and as free downloads as open source. Rumors have reached me suggesting that active research and development are underway in government agencies, universities, and companies located in a number of countries other than the U.S. Some believe the U.S. is the Zeus of AI, assisted by Naiads. Okay, but you know those Greek gods can be unpredictable.

Thus, what’s a commitment? I am not sure what the word means today. I asked You.com, a smart search system, to define the term for me. The system dutifully returned this explanation:

commitment is defined as “an agreement or pledge to do something in the future; the state or an instance of being obligated or emotionally impelled; the act of committing, especially the act of committing a crime.” In general, commitment refers to a promise or pledge to do something, often with a strong sense of dedication or obligation. It can also refer to a state of being emotionally invested in something or someone, or to the act of carrying out a particular action or decision.

Several words and phrases jumped out at me; namely, “do something in the future.” What does “do” mean? What is “the future”? Next week, next month, a decade from a specific point in time, etc.? “Obligated” is an intriguing word. What compels the obligation? A threat, a sense of duty, an understanding of a shared ethical fabric? “Promise” evokes a young person’s statement to a parent when caught drinking daddy’s beer; for example, “Mom, I promise I won’t do that again.” The “emotional” investment is an angle that reminds me that 40 to 50 percent of first marriages end in divorce. Commitments — even when bound by social values — are flimsy things for some. Would I fly on a commercial airline whose crash rate was 40 to 50 percent? Would you?


“Okay, we broke the window. Now what do we do?” asks the leader of the pack. “Run,” says the brightest of the group. “If we are caught, we just say, ‘Okay, we will fix it.’” “Will we?” asks the smallest of the gang. “Of course not,” replies the leader. Thanks, MidJourney, you create original kid images well.

Why make any noise about commitment?

I read “How Do the White House’s A.I. Commitments Stack Up?” The write up is a personal opinion about an agreement between “the White House” and the big US players in artificial intelligence. The focus was understandable because those in attendance are wrapped in the red, white, and blue; presumably pay taxes; and want to do what’s right, save the rain forest, and be green.

Some of the companies participating in the meeting have testified before Congress. I recall at least one of the firms’ senior managers saying, “Senator, thank you for that question. I don’t know the answer. I will have my team provide that information to you…” My hunch is that a few of the companies in attendance at the White House meeting could use the phrase or a similar one at some point in the “future.”

The table below lists most of the commitments to which the AI leaders showed some receptivity. The left-hand column presents the commitments, and the right-hand column offers some hypothesized reactions from a nation state quite opposed to the United States, the US dollar, the hegemony of US technology, baseball, apple pie, etc.

Commitment | Gamed Response
Security testing before release | Based on historical security activities, not to worry
Sharing AI information | Let’s order pizza and plan a front company based in Walnut Creek
Protect IP about models | Let’s canvass our AI coders and pick some to get jobs at these outfits
Permit pentesting | Yes, pentesting. Order some white hats with happy faces
Tell users when AI content is produced | Yes, let’s become registered users. Who has a cousin in Mountain View?
Report about use of the AI technologies | Make sure we are on the mailing list for these reports
Research AI social risks | Do we own a research firm? Can we buy the research firm assisting these US companies?
Use AI to fix up social ills | What is a social ill? Call the general, please, and ask.

The PR angle is obvious. I wonder if commitments will work. The firms have one objective; that is, to meet the expectations of their stakeholders. In order to do that, the firms must operate from the baseline of self-interest.

Net net: A plot of techno-land now has a few big outfits working and thinking hard about how to buy up the best plots. What about zoning, government regulations, and doing good things for small animals and wild flowers? Yeah. No problem.

Stephen E Arnold, July 23, 2023

Yet Another Way to Spot AI Generated Content

July 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The dramatic emergence of ChatGPT has people frantically searching for ways to distinguish AI-generated content from writing by actual humans. Naturally, many are turning to AI solutions to solve an AI problem. Some tools have been developed that detect characteristics of dinobaby writing, like colloquialisms and emotional language. Unfortunately for the academic community, these methods work better on Reddit posts and Wikipedia pages than on academic writing. After all, research papers have employed a bone-dry writing style since long before the emergence of generative AI.


Which tea cup is worth thousands and which is a fabulous fake? Thanks, MidJourney. You know your cups or you are in them.

Cell Reports Physical Science details the development of a niche solution in the ad article, “Distinguishing Academic Science Writing from Humans or ChatGPT with Over 99% Accuracy Using Off-the-Shelf Machine Learning Tools.” We learn:

“In the work described herein, we sought to achieve two goals: the first is to answer the question about the extent to which a field-leading approach for distinguishing AI- from human-derived text works effectively at discriminating academic science writing as being human-derived or from ChatGPT, and the second goal is to attempt to develop a competitive alternative classification strategy. We focus on the highly accessible online adaptation of the RoBERTa model, GPT-2 Output Detector, offered by the developers of ChatGPT, for several reasons. It is a field-leading approach. Its online adaptation is easily accessible to the public. It has been well described in the literature. Finally, it was the winning detection strategy used in the two most similar prior studies. The second project goal, to build a competitive alternative strategy for discriminating scientific academic writing, has several additional criteria. We sought to develop an approach that relies on (1) a newly developed, relevant dataset for training, (2) a minimal set of human-identified features, and (3) a strategy that does not require deep learning for model training but instead focuses on identifying writing idiosyncrasies of this unique group of humans, academic scientists.”

One of these idiosyncrasies, for example, is a penchant for equivocal terms like “but,” “however,” and “although.” Developers used the open source XGBoost software library for this project. The write-up describes the tool’s development and results at length, so navigate there for those details. But what happens, one might ask, the next time ChatGPT levels up? And the next? And so on? We are assured developers have accounted for this game of cat and mouse and will release updated tools quickly each time the chatbot evolves. What a winner—for the marketing team, that is.
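The general recipe is simple enough to sketch: a handful of human-picked writing features fed to off-the-shelf XGBoost. The features and the tiny “corpus” below are illustrative stand-ins, not the authors’ actual feature set or data:

```python
# Sketch of the paper's general strategy: hand-picked writing features
# fed to an off-the-shelf XGBoost classifier. Features and data here are
# illustrative stand-ins, not the authors' actual feature set.
import numpy as np
from xgboost import XGBClassifier

EQUIVOCAL = ("but", "however", "although")

def features(text: str) -> list[float]:
    words = text.lower().split()
    n = max(len(words), 1)
    hedges = sum(words.count(w) for w in EQUIVOCAL) / n  # equivocal-term rate
    sentences = [s for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences] or [0]
    return [hedges, float(np.mean(lengths)), float(np.std(lengths))]

# Tiny illustrative corpus: label 1 = human academic, 0 = ChatGPT-style.
texts = [
    "However, the data suggest otherwise. Although limited, the effect holds.",
    "The results are clear. The method works well. The approach is robust.",
]
labels = [1, 0]

X = np.array([features(t) for t in texts])
model = XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, labels)
print(model.predict(np.array([features("But the assay, although noisy, replicated.")])))
```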

Cynthia Murrell, July 21, 2023

Will AI Replace Interface Designers? Sure, Why Not?

July 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Lost in the flaming fluff surrounding generative AI is one key point: Successful queries require specialized expertise. A very good article from the Substack blog Public Experiments clearly explains why “Natural Language Is an Unnatural Interface.”


A modern, intuitive, easy-to-use interface. That’s the ticket, MidJourney. Thanks for the output.

We have been told to approach ChatGPT and similar algorithms as we would another human. That’s persuasive marketing but terrible advice. See the post for several reasons this is so (beyond the basic fact that AIs are not humans). Instead, advises writer Varun Shenoy, developers must create user-friendly interfaces that carry one right past the pitfalls. He explains:

“An effective interface for AI systems should provide guardrails to make them easier for humans to interact with. A good interface for these systems should not rely primarily on natural language, since natural language is an interface optimized for human-to-human communication, with all its ambiguity and infinite degrees of freedom. When we speak to other people, there is a shared context that we communicate under. We’re not just exchanging words, but a larger information stream that also includes intonation while speaking, hand gestures, memories of each other, and more. LLMs unfortunately cannot understand most of this context and therefore, can only do as much as is described by the prompt. Under that light, prompting is a lot like programming. You have to describe exactly what you want and provide as much information as possible. Unlike interacting with humans, LLMs lack the social or professional context required to successfully complete a task. Even if you lay out all the details about your task in a comprehensive prompt, the LLM can still fail at producing the result that you want, and you have no way to find out why. Therefore, in most cases, a ‘prompt box’ should never be shoved in a user’s face. So how should apps integrate LLMs? Short answer: buttons.”

Users do love buttons. And though this advice might seem like an oversimplification, Shenoy observes most natural-language queries fall into one of four categories: summarization, simple explanations, multiple perspectives, and contextual responses. The remaining use cases are so few he is comfortable letting ChatGPT handle them. Shenoy points to GitHub Copilot as an example of an effective constrained interface. He feels so strongly about the need to corral queries he expects such interfaces will be *the* products of the natural language field. One wonders—when will such a tool pop up in the MS Office Suite? And when it does, will the fledgling Prompt Engineering field become obsolete before it ever leaves the nest?
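Shenoy’s “buttons” argument reduces to this: the app, not the user, owns the prompt. A minimal sketch of the idea, with the four categories above as the buttons; the templates and the send() stub are invented for illustration:

```python
# Sketch of a "buttons, not prompt boxes" wrapper: each button maps a
# fixed user intent onto a vetted prompt template. Templates and the
# send() stub are illustrative; wire send() to whatever LLM client you use.

PROMPT_TEMPLATES = {
    "Summarize":        "Summarize the following text in three sentences:\n{text}",
    "Explain simply":   "Explain the following text to a general reader:\n{text}",
    "Other viewpoints": "Give two alternative perspectives on:\n{text}",
    "Answer from doc":  "Using only the text below, answer: {question}\n\n{text}",
}

def send(prompt: str) -> str:
    # Stub: replace with a real LLM call (e.g., an OpenAI client).
    raise NotImplementedError

def press_button(action: str, **kwargs) -> str:
    """The user picks an action; the app, not the user, writes the prompt."""
    template = PROMPT_TEMPLATES[action]   # no free-form prompt box anywhere
    return send(template.format(**kwargs))

# Example wiring: press_button("Summarize", text=document_text)
```

The design point is the constraint: the template dictionary is the entire interface surface, so there is no raw prompt box to misuse.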

Cynthia Murrell, July 20, 2023

