Yet Another Way to Spot AI Generated Content

July 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The dramatic emergence of ChatGPT has people frantically searching for ways to distinguish AI-generated content from writing by actual humans. Naturally, many are turning to AI solutions to solve an AI problem. Some tools detect characteristics of dinobaby writing, like colloquialisms and emotional language. Unfortunately for the academic community, these methods work better on Reddit posts and Wikipedia pages than on academic writing. After all, research papers have employed a bone-dry style since long before the emergence of generative AI.


Which teacup is worth thousands and which is a fabulous fake? Thanks, MidJourney. You know your cups or you are in them.

Cell Reports Physical Science details the development of a niche solution in the article, “Distinguishing Academic Science Writing from Humans or ChatGPT with Over 99% Accuracy Using Off-the-Shelf Machine Learning Tools.” We learn:

“In the work described herein, we sought to achieve two goals: the first is to answer the question about the extent to which a field-leading approach for distinguishing AI- from human-derived text works effectively at discriminating academic science writing as being human-derived or from ChatGPT, and the second goal is to attempt to develop a competitive alternative classification strategy. We focus on the highly accessible online adaptation of the RoBERTa model, GPT-2 Output Detector, offered by the developers of ChatGPT, for several reasons. It is a field-leading approach. Its online adaptation is easily accessible to the public. It has been well described in the literature. Finally, it was the winning detection strategy used in the two most similar prior studies. The second project goal, to build a competitive alternative strategy for discriminating scientific academic writing, has several additional criteria. We sought to develop an approach that relies on (1) a newly developed, relevant dataset for training, (2) a minimal set of human-identified features, and (3) a strategy that does not require deep learning for model training but instead focuses on identifying writing idiosyncrasies of this unique group of humans, academic scientists.”

One of these idiosyncrasies, for example, is a penchant for equivocal terms like “but,” “however,” and “although.” The developers used the open source XGBoost software library for this project. The write-up describes the tool’s development and results at length, so navigate there for those details. But what happens, one might ask, the next time ChatGPT levels up? And the next? And so on? We are assured developers have accounted for this game of cat and mouse and will release updated tools quickly each time the chatbot evolves. What a winner—for the marketing team, that is.
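
For the curious, here is a minimal sketch of this type of feature-based classifier. The features (equivocal-term rate, sentence length, lexical diversity) and the toy training texts are my own illustrations, not the paper’s actual feature set or data.

```python
# Illustrative sketch: a few hand-picked writing features fed to XGBoost.
import re

import numpy as np
import xgboost as xgb

EQUIVOCAL = ("but", "however", "although")  # the "equivocal terms" noted above

def features(text: str) -> list[float]:
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    sentences = max(text.count("."), 1)
    return [
        sum(words.count(w) for w in EQUIVOCAL) / n,  # equivocal-term rate
        n / sentences,                               # mean sentence length
        len(set(words)) / n,                         # lexical diversity
    ]

# Toy labels: 1 = human academic writing, 0 = ChatGPT output.
human = ["However, the effect, although modest, persists; but replication is needed."]
bot = ["The study shows a clear result. The result is important. It matters."]
X = np.array([features(t) for t in human + bot])
y = np.array([1] * len(human) + [0] * len(bot))

clf = xgb.XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(X, y)
print(clf.predict(np.array([features("Although results vary, however, we proceed.")])))
```

No deep learning required, which is the paper’s second stated goal; the real system, of course, trains on a purpose-built dataset rather than two toy sentences.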

Cynthia Murrell, July 21, 2023

Threads: Maybe Bad Fuel?

July 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Is this headline sad? “Threads Usage Drops By Half From Initial Surge” reports that the alleged cage fighters’ social messaging systems are in flux.


“That old machine sure looks dead to me,” observes the car owner to his associates. One asks, “Can it be fixed?” The owner replies, “I hope not.” MidJourney deserves a pat on its digital head for this art work, doesn’t it?

This week it is the Zuckbook in decline. The cited article reports:

On its best day, July 7, Threads had more than 49 million daily active users on Android, worldwide, according to Similarweb estimates. That’s about 45% of the usage of Twitter, which had more than 109 million active Android users that day. By Friday, July 14, Threads was down to 23.6 million active users, or about 22% of Twitter’s audience.

The message is, “Threads briefly captured a big chunk of Twitter’s market.”

The cited article adds some sugar to the spoiled cake:

If Threads succeeds vs Twitter, the Instagram edge will be a big reason.

Two outstanding services. Two outstanding leaders. Can the social messaging sector pick a winner? Does anyone wonder how much information influence the winner will have?

I do. Particularly when the two horses in the race are Musk from Beyond and Zuck the Muscular.

Stephen E Arnold, July 20, 2023

Grasping at Threads and Missing Its Potential for Weaponized Information Delivery

July 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

My great grandmother, bless her, used to draw flowers in pots. She used Crayola crayons. Her “style” reminded me of a ball of tangled thread. Examples of her work are lost to time. But MidJourney produced this image, which is somewhat similar to how she depicted blooms:


This is a rather uninspiring ball of thread generated by the once-creative MidJourney. Perhaps the system is getting tired?

Keep in mind I am recounting what I recall from when I was in grade school in the early 1950s. I thought of these tangled images when I read “Engagement on Instagram’s Threads Has Cratered.” The article suggests that users are losing interest in the Zuck’s ball of thread. I noted this statement:

Time spent on the app dropped over 50% from 20 minutes to 8 minutes, analysts found.

I have spent some time with analysts in my career. I know that data can be as malleable as another toy in a child’s box of playthings; specifically, the delightfully named and presumably non-toxic Play-Doh.

The article offers this information too:

Threads was unveiled as Meta’s Twitter killer and became available for download in the U.S. on July 5, and since then, the platform has garnered well over 100 million users, who are able to access it directly from Instagram. The app has not come without its fair share of issues, however.

Threads — particularly when tangled up — can be a mess. But the Zuckbook has billions of users across its properties. A new service taps an installed base and gets a trampoline effect. When I was young, trampolines were interesting for a short period of time. The article is not exactly gleeful, but I detected some negativity toward the Zuck’s most recent innovation in me-too technology.

Now back to my great-grandmother (bless her, of course). She took the notion of tangled thread and converted it into flower blossoms. My opinion is that Threads will become another service used by actors less benign than my great-grandmother (bless her again). The ability to generate weaponized information, link to those little packets of badness, and augment other content is going to be of interest to some entities.

A free social media service can deliver considerable value to a certain segment of online users. The Silicon Valley “real” news folks may be writing about Threads to say, “The Zuck’s Threads service is a tangled mess.” The more important angle, in my opinion, is that it provides another, possibly quite useful service to those who seek to cause effects not nearly as much fun as saying, “Zuck’s folly flops.” It may flop, but in the meantime, Threads warrants close observation, not Play-Doh data. Perhaps those wrestling with VPN bans will explore technical options for bypassing packet inspection, IP blocks, and port blocks, or turn to Fiverr gig workers or colleagues in the US?

Stephen E Arnold, July 20, 2023

Will AI Replace Interface Designers? Sure, Why Not?

July 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Lost in the flaming fluff surrounding generative AI is one key point: Successful queries require specialized expertise. A very good article from the Substack blog Public Experiments clearly explains why “Natural Language Is an Unnatural Interface.”


A modern, intuitive, easy-to-use interface. That’s the ticket, MidJourney. Thanks for the output.

We have been told to approach ChatGPT and similar algorithms as we would another human. That’s persuasive marketing but terrible advice. See the post for several reasons this is so (beyond the basic fact that AIs are not humans). Instead, advises writer Varun Shenoy, developers must create user-friendly interfaces that carry one right past the pitfalls. He explains:

“An effective interface for AI systems should provide guardrails to make them easier for humans to interact with. A good interface for these systems should not rely primarily on natural language, since natural language is an interface optimized for human-to-human communication, with all its ambiguity and infinite degrees of freedom. When we speak to other people, there is a shared context that we communicate under. We’re not just exchanging words, but a larger information stream that also includes intonation while speaking, hand gestures, memories of each other, and more. LLMs unfortunately cannot understand most of this context and therefore, can only do as much as is described by the prompt. Under that light, prompting is a lot like programming. You have to describe exactly what you want and provide as much information as possible. Unlike interacting with humans, LLMs lack the social or professional context required to successfully complete a task. Even if you lay out all the details about your task in a comprehensive prompt, the LLM can still fail at producing the result that you want, and you have no way to find out why. Therefore, in most cases, a ‘prompt box’ should never be shoved in a user’s face. So how should apps integrate LLMs? Short answer: buttons.”

Users do love buttons. And though this advice might seem like an oversimplification, Shenoy observes that most natural-language queries fall into one of four categories: summarization, simple explanations, multiple perspectives, and contextual responses. The remaining use cases are so few he is comfortable letting ChatGPT handle them. Shenoy points to GitHub Copilot as an example of an effective constrained interface. He feels so strongly about the need to corral queries that he expects such interfaces will be *the* products of the natural language field. One wonders: when will such a tool pop up in the MS Office suite? And when it does, will the fledgling prompt engineering field become obsolete before it ever leaves the nest?
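
To make the “buttons, not prompt boxes” idea concrete, here is a minimal sketch: each button maps to a fixed, pre-tested prompt template, so the user never touches a raw prompt. The template wording and the send_to_llm() helper are my own illustrative assumptions, not anything from the cited post.

```python
# Hypothetical sketch: one button per common query type, each bound to a
# fixed prompt template. The interface, not the user, decides the prompt.
TEMPLATES = {
    "Summarize": "Summarize the following text in three bullet points:\n{text}",
    "Explain simply": "Explain the following text to a general reader:\n{text}",
    "Other perspectives": "List three alternative viewpoints on the following text:\n{text}",
    "Answer from context": "Using only the text below, answer this question: {question}\n{text}",
}

def send_to_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g., a chat-completion API request).
    return f"[model output for: {prompt[:40]}...]"

def on_button_click(label: str, text: str, question: str = "") -> str:
    prompt = TEMPLATES[label].format(text=text, question=question)
    return send_to_llm(prompt)

print(on_button_click("Summarize", "Threads usage dropped by half in a week."))
```

Note how the four buttons line up with Shenoy’s four query categories; the guardrail is simply that the ambiguity of free-form natural language never reaches the model.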

Cynthia Murrell, July 20, 2023

LLM Unreliable? Probably Absolutely No Big Deal Whatsoever For Sure

July 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

My team and I are working on an interesting project. Part of that work requires that we grind through papers, journal articles, and self-published (and essentially unverifiable) comments about smart software.


“What do you mean the outputs from the smart software I have been using for my homework deliver the wrong answers?” says this disappointed user of a browser and word processor with artificial intelligence baked in. Is she damning recursion? MidJourney created this emotion-packed image of a person who has learned that she has been accused of plagiarism by her Sociology 215 professor.

Not surprisingly, we come across some wild and crazy information. On rare occasions we come across a paper, mostly ignored, which presents information that confirms many of our tests of smart software. When we do tests, we arrive with specific queries in mind. These relate to the behaviors of bad actors; for example, online services which front for cyber criminals, systems purpose built to make unmasking a bad actor time consuming, and methods for determining which person owns a particular domain engaged in the sale of fullz.

You can probably guess that most of the smart and dumb online finding services are of little or no help. We have to check them, however, simply because we want to be thorough. At a meeting last week, one of my team members, who has a degree in library science, pointed out that the outputs from the services we use were becoming less useful than they were several months ago. I don’t spend too much time testing these services because I am a dinobaby and I run projects. My doing days are over. But I do listen to informed feedback. Her comment was one I had not seen in the Google PR onslaught about its method, in the utterances of Sam AI-Man at OpenAI, or from the assorted LinkedIn gurus who post about smart software.

Then I spotted “How Is ChatGPT’s Behavior Changing over Time?”

I think the authors of the paper have documented what my team member articulated to me and others working on a smart software project. The paper states in polite academic prose:

Our findings demonstrate that the behavior of GPT-3.5 and GPT-4 has varied significantly over a relatively short amount of time.

The authors provide some data, a few diagrams, and some footnotes.

The most significant item in the journal article, in my opinion, is the use of the word “drifts.” Here’s the specific line:

Monitoring reveals substantial LLM drifts.

Yep, drifts.

What exactly is a drift in a numerical mélange like a large language model, its algorithms, and its probabilistic pulsing? In a nutshell, LLMs are formed by humans and trained on information to some degree created by humans. Sharp corners are created from decisions and data which may have rounded corners or be the equivalent of a wad of Play-Doh after a kindergartener manipulates the stuff. Layers of numerical recipes are hooked together to output information useful to a human or system.

Those who worked with early versions of the Autonomy Neuro Linguistic black box know about the Play-Doh effect. Train the system on a crafted set of documents (information). Run test queries. Adjust a few knobs and dials afforded by the Autonomy system. Turn it loose on the Word documents and other content for which filters were installed. Then let users run queries.

To be upfront, using the early version of Autonomy in 1999 or 2000 was pretty darned good. However, Autonomy recommended that the system be retrained every few months.

Why?

The answer, as I recall, is that as new data were encountered by the Autonomy Neuro Linguistic engine, the engine had to cope with new words, names of companies, and phrases. Without retraining, the system would rely on what it had from its initial set up and tuning. Without retraining or recalibration, the Autonomy system would return results which were less useful in some situations. Operate a system without retraining, and the results would degrade over time.
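
A toy illustration of why that retraining mattered: freeze a vocabulary at setup time, then measure how much of each new batch of text the frozen model has never seen. The documents below are invented; the rising out-of-vocabulary rate is the point.

```python
# Hypothetical sketch: a vocabulary frozen at initial training slowly loses
# touch as new words, company names, and phrases show up in fresh content.
initial_vocab = set("the of a search engine returns document query results".split())

monthly_docs = [
    "the search engine returns document query results",    # launch era
    "the query results mention a new broadband company",   # new names creep in
    "podcasts and blogs dominate the new web zeitgeist",   # mostly unseen terms
]

for month, doc in enumerate(monthly_docs, start=1):
    words = doc.split()
    oov_rate = sum(w not in initial_vocab for w in words) / len(words)
    print(f"month {month}: {oov_rate:.0%} of terms unknown to the frozen model")
```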

Math types labor to make inference-hooked and probabilistic systems stay on course. The systems today use tricks that make a controlled vocabulary look like the tool of a dinobaby like me. Without getting into the weeds, the Autonomy system would drift.

And what does the cited paper say? In effect, LLMs drift too.
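
What might the “monitoring” the paper mentions look like in practice? Here is a minimal sketch under stated assumptions: run the same fixed probes against two dated model snapshots and compare accuracy. The ask_model() function and its canned answers are invented stand-ins for real API calls, though the prime-number probe echoes the kind of question the paper used.

```python
# Hypothetical drift monitor: fixed probes, two snapshots, compare accuracy.
FIXED_PROBES = [
    ("Is 17077 a prime number? Answer yes or no.", "yes"),
    ("Is 1000 larger than 999? Answer yes or no.", "yes"),
]

def ask_model(snapshot: str, prompt: str) -> str:
    # Stand-in for a real call to a dated model snapshot.
    canned = {
        "model-2023-03": {"prime": "yes", "larger": "yes"},
        "model-2023-06": {"prime": "no", "larger": "yes"},  # a drifted answer
    }
    key = "prime" if "prime" in prompt else "larger"
    return canned[snapshot][key]

def accuracy(snapshot: str) -> float:
    hits = sum(ask_model(snapshot, p).lower() == a for p, a in FIXED_PROBES)
    return hits / len(FIXED_PROBES)

for snapshot in ("model-2023-03", "model-2023-06"):
    print(snapshot, accuracy(snapshot))  # a drop between snapshots is drift
```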

What does this mean? Here’s my dinobaby list of items to keep in mind:

  1. Smart software, if left to its own devices, will degrade over time; that is, outputs will drift from what the user wants. Feedback from users accelerates the drift because some feedback is, from the smart software’s point of view, spot on even if it is crazy or off the wall. Let this run for a period of time and you get what the paper’s authors and my team member pointed out: degradation.
  2. Users who know how to look at a system’s outputs and validate or identify off-the-mark results can take corrective action; that is, ignore the outputs or fix them up. This is not common, and it requires specialized knowledge, time, and mental sharpness. Those who depend on TikTok or a smart system may not have these qualities in equal amounts.
  3. Entrepreneurs want money, power, or a new Tesla. Bringing up issues about smart software growing increasingly crazy like the dinobaby down the street is not valued. Hence, substantive problems with smart systems will require time, money, and expertise to remediate. Who wants that? Smart software is designed to improve efficiency, reduce costs, and make money. The result is a group of individuals who do PR, not up-to-snuff software.

Will anyone pay attention to this cited journal article? Sure, a few interns and maybe a graduate student or two. But at this time, the trend is that AI works and AI applied to something delivers a solution. Is that solution reliable or is it just good enough? What if the outputs deteriorate in a subtle way over time? What’s the fix? Who is responsible? The engineer who fiddled with thresholds? The VP of product development who dismissed objections about inherent bias in outputs?

I think you may have an answer to these questions. As a dinobaby, I can say, “Folks, I don’t have a clue about fixing up the smart software juggernaut.” I am skeptical of those who say, “Hey, it just works.” Okay, I hope you are correct.

Stephen E Arnold, July 19, 2023

Smart Software: Good Enough Plus 18 Percent More Quality

July 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Do I believe the information in “ChatGPT Can Turn Bad Writers into Better Ones”? No, I don’t. First, MIT is the outfit which had a special relationship with Jeffrey Epstein. Yep, that guy. Quite a pal. Second, academic outfits are known to house individuals who just make up or enhance research data. Does MIT have professors who do that? Of course not. But with Harvard professionals engaging in some ethical ballroom dancing with data, I want to be cautious. (And, please, navigate to the original write up and read the report. Subscribe too, because Mr. Epstein is indisposed and unable to contribute to the academic keel of the scholarly steamboat.)

What counts, however, is perception, not reality. The write up puts some Chemical Guys’ shine on the information, so let’s take a look. It will be a shallow one because that is the spirit of some research today, and this dinobaby wants to get with the program. My writing may be lousy, but I do it myself, which seems to go against the current trend.

Here’s the core point in the write up, from my point of view in rural Kentucky, a state known for its intellectual rigor and fine writing about basketball:

A new study by two MIT economics graduate students … suggests it could help reduce gaps in writing ability between employees. They found that it could enable less experienced workers who lack writing skills to produce work similar in quality to that of more skilled colleagues.

The point in my opinion is that cheaper workers can do what more expensive workers can do.

Just to drive home the idea, the write up included this point:

The writers who chose to use ChatGPT took 40% less time to complete their tasks, and produced work that the assessors scored 18% higher in quality than that of the participants who didn’t use it.


The MidJourney highly original art system produced this picture of an accountant, trained online by the once proud University of Phoenix, who manifests great joy upon discovering that smart software can produce marketing and PR collateral faster, cheaper, and better than a disgruntled English major wanting to rent a larger apartment in a big city. The accountant seems to be sitting in a modest thundershower of budget surplus.

For many, MIT has heft. Therefore, will this write up and the expert researchers’ data influence people; for instance, owners of marketing, SEO, reputation management, and PR companies?

Yep.

Observations:

  1. Layoffs will be accelerating.
  2. Good enough becomes outstanding when financial benefits are fungible.
  3. Assurances about employment security will be irrelevant.

And what about those MIT graduates? Better get a degree in math, computer science, engineering, or medieval English poetry. No, strike medieval English poetry. Substitute “prompt engineer” or museum guide in Albania.

Stephen E Arnold, July 19, 2023

AI-Search Tool Talpa Burrows Into Library Catalogues

July 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

For a few years now, libraries have been able to augment their online catalogues with enrichment services from Syndetics Unbound, which adds details and imagery to each entry. Now the company is incorporating new AI capabilities, we learn from its write-up, “Introducing Talpa Search.” Talpa is still experimental and is temporarily available to libraries already using Syndetics Unbound.


A book lover in action. Thanks, MidJourney. You made me more appealing than I was in 1951, when I got kicked out of the library for reading books for adults, not stuff about Freddy the Pig.

Participating libraries will get a year of the service for free. We cannot know just how much they will be saving, though, since the pricing remains a mystery. Writer Tim Spalding describes how Talpa works:

“First, Talpa queries large language models (from Claude AI and ChatGPT) for books and other media. Critically, every item is checked against true and authoritative bibliographic data, solving the problem of invented answers (called ‘hallucinations’) that such models can fall into. Second, Talpa uses the natural-language abilities of large language models to parse and understand queries, which are then answered using traditional library data. Thus a search for ‘novels about World War II in France’ is broken down into subjects and tags and answered with results from the library’s collection. Our authoritative book data comes from Syndetics Unbound, Bowker and LibraryThing. Surprisingly, Talpa’s ability to find books by their cover design isn’t powered by AI at all, but by the effort of thousands of book lovers who have played LibraryThing’s CoverGuess cover-tagging game since 2010!”
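
The anti-hallucination step is the interesting part, so here is a minimal sketch of it under stated assumptions: the suggest_titles() helper and the tiny CATALOGUE dictionary below are invented stand-ins for the LLM call and for the Syndetics Unbound, Bowker, and LibraryThing bibliographic data Spalding mentions.

```python
# Hypothetical sketch: keep only LLM-suggested titles that match an
# authoritative catalogue, discarding invented ("hallucinated") books.
CATALOGUE = {  # stand-in for real bibliographic records
    "suite francaise": {"author": "Irene Nemirovsky"},
    "all the light we cannot see": {"author": "Anthony Doerr"},
}

def suggest_titles(query: str) -> list[str]:
    # Stand-in for asking an LLM for books matching the query; the third
    # title is deliberately invented so the check below can filter it out.
    return ["Suite Francaise", "All the Light We Cannot See", "Shadows Over Normandy"]

def talpa_style_search(query: str) -> list[dict]:
    results = []
    for title in suggest_titles(query):
        record = CATALOGUE.get(title.lower())
        if record:  # verified against authoritative data, so it stays
            results.append({"title": title, **record})
    return results

print(talpa_style_search("novels about World War II in France"))
```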

Interesting. If you don’t happen to be part of a library using Syndetics, you can try Talpa out at one of the three libraries linked to in the post. The tool sports a cute mole mascot and, to add a bit of personality, supplies mole facts beneath the search bar. As with many AI tools, the functionality has plenty of room to grow. For example, my search for “weaving velvet” did return a few loom-centered books scattered through the results but more prominently suggested works of fiction or philosophy that simply contained “velvet” in the title. (Including, adorably, several versions of “The Velveteen Rabbit.”) The write-up does not share when the tool will be available more widely, but we hope it will be more refined when it is. Is it AI? Isn’t everything?

Cynthia Murrell, July 19, 2023

Threads and Twitter: A Playground Battle for the Ages

July 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Twitter helped make some people famous. No big name publisher needed. Just an algorithm and a flow of snappy comments. Fame. Money. A platformer, sorry, I meant platform.


Is informed, objective analysis of Facebook and Twitter needed? Sure, but the approach taken by some is more like an argument at a school picnic over the tug-of-war teams. Which team will end up with grass stains? Which will get the ribbon with the check mark? MidJourney developed this original art object.

Now that Twitter has gone Musky, those who may perceive themselves as entitled to a blue check, algorithmic love, and a big, free megaphone are annoyed. At least that’s how I understand “Five Reasons Threads Could Still Go the Distance.” This essay is about the great social media dust up between those who love Teslas and those who can find some grace in the Zuck.

Wait, wasn’t the Zuck the subject of some criticism? Cambridge Analytica-type activities and possibly some fancy dancing with the name of the company, the future of the metaverse, and expanding land holdings in Hawaii? Forget that.

I learned in the article, which is flavored with some business consulting advice from a famous social media personality:

It’s always a fool’s errand to judge the prospects of a new social network a couple weeks into its history.

So what is the essay about? Exactly.

I learned from the cited essay:

Twitter’s deterioration continues to accelerate. Ad revenue is down by 50 percent, according to Musk, and — despite the company choosing not to pay many of its bills — the company is losing money. Rate limits continue to make the site unusable to many free users, and even some paid ones. Spam is overwhelming users’ direct messages so much that the company disabled open DMs to free users. The company has lately been reduced to issuing bribe-like payouts to a handful of hand-picked creators, many of whom are aligned with right-wing politics. If that’s not a death spiral, what is?

Wow, a death spiral at the same time Threads may be falling in love with “rate limits.”

Can the Zuck kill off Twitter? Here’s hoping. But there is only one trivial task to complete, according to the cited article:

To Zuckerberg, the concept has been proved out. The rest is simply an execution problem. [Emphasis added]

As that lovable influencer, social media maven, and management expert Peter Drucker observed:

What gets measured, gets managed.

Isn’t it early days for measurement? Instagram was a trampoline for Threads. The Musk management modifications seem to be working exactly as the rocket scientist planned them to function. What do billions in losses mean to a person whose rockets don’t blow up too often?

Several observations:

  1. Analyzing Threads and Twitter is a bit like a school yard argument, particularly when the respective big dogs want to fight in a cage in Las Vegas.
  2. The possible annoyance or mild outrage from those who loved the good old free Twitter is palpable.
  3. Social media remains an interesting manifestation of human behavior.

Net net: I find social media a troubling innovation. But it does create news, which some find as vital as oxygen, water, and clicks. Yes, clicks. That is the objective, I believe.

Stephen E Arnold, July 18, 2023

Sam the AI-Man Explains His Favorite Song, My Way, to the European Union

July 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It seems someone is uncomfortable with AI regulation despite asking for regulation. TIME posts this “Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation.” OpenAI insists AI must be regulated posthaste. CEO Sam Altman even testified to Congress about it. But when push comes to legislative action, the AI-man balks. At least when it affects his company. Reporter Billy Perrigo tells us:

“The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation. But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company.”

What, to Altman’s mind, makes OpenAI exempt from the much-needed regulation? Its product is a general-purpose AI, as opposed to a high-risk one. So it contributes to benign projects as well as consequential ones. How’s that for logic? Apparently it was good enough for EU regulators. Or maybe they just caved to OpenAI’s empty threat to pull out of Europe.


Is it true that Mr. AI-Man only follows the rules he promulgates? Thanks for the Leonardo-like image of students violating a university’s Keep Off the Grass rule.

We learn:

“The final draft of the Act approved by E.U. lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk. Instead, the agreed law called for providers of so-called ‘foundation models,’ or powerful AI systems trained on large quantities of data, to comply with a smaller handful of requirements including preventing the generation of illegal content, disclosing whether a system was trained on copyrighted material, and carrying out risk assessments.”

Of course, all of this may be a moot point given the catch-22 of asking legislators to regulate technologies they do not understand. Tech companies’ lobbying dollars seem to provide the most clarity.

Cynthia Murrell, July 18, 2023

When Wizards Flail: The Mysteries of Smart Software

July 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

How about that smart software stuff? VCs are salivating. Whiz kids are emulating Sam AI-man. Users are hoping there is a job opening for a Wal-Mart greeter. But there is a hitch in the git along; specifically, some bright experts are not able to understand what smart software does to generate output. The cloud of unknowing is thick and has settled over the Land of Obfuscation.

“Even the Scientists Who Build AI Can’t Tell You How It Works” has a particularly interesting kicker:

“We built it, we trained it, but we don’t know what it’s doing.”


A group of artificial intelligence engineers struggling with the question, “What the heck is the system doing?” A click of the slide rule for MidJourney for this dramatic depiction of AI wizards at work.

The write up (which is an essay-interview confection) includes some thought-provoking comments. Here are five; you can visit the cited article for more scintillating insights:

Item 1: “… with reinforcement learning, you say, “All right, make this entire response more likely because the user liked it, and make this entire response less likely because the user didn’t like it.”

Item 2: “… The other big unknown that’s connected to this is we don’t know how to steer these things or control them in any reliable way. We can kind of nudge them …”

Item 3: “We don’t have the concepts that map onto these neurons to really be able to say anything interesting about how they behave.”

Item 4: “… we can sort of take some clippers and clip it into that shape. But that doesn’t mean we understand anything about the biology of that tree.”

Item 5: “… because there’s so much we don’t know about these systems, I imagine the spectrum of positive and negative possibilities is pretty wide.”

For more of this type of “explanation,” please, consult the source document cited above.
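
Item 1 at least can be made concrete. Below is a toy, and only a toy, of the “nudge”: shift a response distribution toward what a user liked and away from what a user disliked. This is a cartoon of reinforcement-style updating I wrote for illustration, not any lab’s actual training code.

```python
# Toy "nudge": raise the weight of a liked response, lower a disliked one.
import math

logits = {"helpful": 0.0, "evasive": 0.0, "rude": 0.0}  # log-weights over canned responses

def nudge(response: str, liked: bool, step: float = 0.5) -> None:
    # "Make this entire response more likely... less likely" (Item 1).
    logits[response] += step if liked else -step

def probabilities() -> dict[str, float]:
    z = sum(math.exp(v) for v in logits.values())
    return {k: math.exp(v) / z for k, v in logits.items()}

nudge("helpful", liked=True)
nudge("rude", liked=False)
print(probabilities())  # "helpful" rises, "rude" falls; *why* remains opaque
```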

Several observations:

  1. I like the nudge and watch approach. Humanoids learning about what their code does may be useful.
  2. The nudging is subjective (a human skill), and the tree-growing comparison concedes the experts do not understand the biology of the tree they are clipping. Just do the bonsai thing. Interesting, but is it efficient? Will it work? Sure, or at least as far as Silicon Valley thinking permits.
  3. The wide spectrum of good and bad is real. My reaction is to ask the striking writers and actors what their views of the bad side of the deal are. What if the writers get frisky and start throwing spit balls or (heaven forbid) old IBM Selectric type balls? Scary.

Net net: Perhaps Google knows best? Tensors, big computers, need for money, and control of advertising: I think I know why Google tries so hard to frame the AI discussion. A useful exercise is to compare what Google’s winner in the smart software power struggle has to say about Google’s vision. You can find that PR emission at this link. Be aware that the interviewer’s questions are almost as long as the interview subject’s answers. Does either suggest downsides comparable to the five items cited in this blog post?

Stephen E Arnold, July 18, 2023
