LLM Unreliable? Probably Absolutely No Big Deal Whatsoever For Sure

July 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

My team and I are working on an interesting project. Part of that work requires that we grind through papers, journal articles, and self-published (and essentially unverifiable) comments about smart software.

“What do you mean the outputs from the smart software I have been using for my homework deliver the wrong answer?” says this disappointed user of a browser and word processor with artificial intelligence baked in. Is she damning recursion? MidJourney created this emotion-packed image of a person who has learned that she has been accused of plagiarism by her Sociology 215 professor.

Not surprisingly, we come across some wild and crazy information. On rare occasions we encounter a paper, mostly ignored, which presents information that confirms many of our tests of smart software. When we do tests, we arrive with specific queries in mind. These relate to the behaviors of bad actors; for example, online services which front for cyber criminals, systems which are purpose built to make it time consuming to unmask a bad actor, and queries to determine which person owns a particular domain engaged in the sale of fullz.

You can probably guess that most of the smart and dumb online finding services are of little or no help. We have to check these, however, simply because we want to be thorough. At a meeting last week, one of my team members, who has a degree in library science, pointed out that the outputs from the services we use were becoming less useful than they were several months ago. I don’t spend too much time testing these services because I am a dinobaby and I run projects. My doing days are over. But I do listen to informed feedback. Her comment was one I had not seen in the Google PR onslaught about its method, the utterances of Sam AI-Man at OpenAI, or from the assorted LinkedIn gurus who post about smart software.

Then I spotted “How Is ChatGPT’s Behavior Changing over Time?”

I think the authors of the paper have documented what my team member articulated to me and others working on a smart software project. The paper states in polite academic prose:

Our findings demonstrate that the behavior of GPT-3.5 and GPT-4 has varied significantly over a relatively short amount of time.

The authors provide some data, a few diagrams, and some footnotes.

The most significant item in the journal article, in my opinion, is the use of the word “drifts.” Here’s the specific line:

Monitoring reveals substantial LLM drifts.

Yep, drifts.

What exactly is a drift in a numerical mélange like a large language model, its algorithms, and its probabilistic pulsing? In a nutshell, LLMs are formed by humans and use information to some degree created by humans. The idea is that sharp corners are created from decisions and data which may have rounded corners or be the equivalent of a wad of Play-Doh after a kindergartener manipulates the stuff. Layers of numerical recipes are hooked together to output information useful to a human or system.
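
For the code-minded, here is a minimal sketch of what such monitoring amounts to: run one fixed benchmark against two dated model snapshots and compare accuracy. The prime-number prompts echo the cited paper’s test; the snapshot labels and canned outputs are stand-ins I invented so the sketch runs without an API key.

```python
# Minimal drift check: score the same fixed benchmark against two dated
# model snapshots. Wire query_model to a real endpoint to monitor a live
# service; the CANNED answers below are illustrative stand-ins.

BENCHMARK = [
    ("Is 17077 a prime number? Answer yes or no.", "yes"),
    ("Is 3972 a prime number? Answer yes or no.", "no"),
]

CANNED = {  # (snapshot, item index) -> model output
    ("march-snapshot", 0): "yes", ("march-snapshot", 1): "no",
    ("june-snapshot", 0): "no",  ("june-snapshot", 1): "no",
}

def query_model(snapshot: str, item: int) -> str:
    return CANNED[(snapshot, item)]  # replace with a real model call

def accuracy(snapshot: str) -> float:
    correct = sum(
        query_model(snapshot, i).strip().lower() == answer
        for i, (_prompt, answer) in enumerate(BENCHMARK)
    )
    return correct / len(BENCHMARK)

drift = accuracy("june-snapshot") - accuracy("march-snapshot")
print(f"Accuracy change between snapshots: {drift:+.1%}")
```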

Those who worked with early versions of the Autonomy Neuro Linguistic black box know about the Play-Doh effect. Train the system on a crafted set of documents (information). Run test queries. Adjust a few knobs and dials afforded by the Autonomy system. Turn it loose on the Word documents and other content for which filters were installed. Then let users run queries.

To be upfront, using the early version of Autonomy in 1999 or 2000 was pretty darned good. However, Autonomy recommended that the system be retrained every few months.

Why?

The answer, as I recall, is that as new data were encountered by the Autonomy Neuro Linguistic engine, the engine had to cope with new words, names of companies, and phrases. Without retraining, the system would rely on what it had from its initial set up and tuning, and it would return results which were less useful in some situations. Operate a system without retraining, and the results would degrade over time.
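
A toy version of that retraining trigger, with made-up documents and an arbitrary threshold, just to make the mechanism concrete: when too many terms in new content fall outside the vocabulary the system was tuned on, flag it for a retrain.

```python
# Toy recalibration check in the spirit of the Autonomy retraining advice:
# if new documents contain too many terms unseen at setup time, the old
# tuning will misjudge them, so flag the engine for retraining.

def vocabulary(docs: list[str]) -> set[str]:
    return {word.lower() for doc in docs for word in doc.split()}

training_docs = ["acme corp acquires widget maker", "widget sales rise"]
new_docs = ["quantum widget startup raises series b", "llm drift worries analysts"]

known = vocabulary(training_docs)
new_terms = vocabulary(new_docs) - known
oov_rate = len(new_terms) / max(1, len(vocabulary(new_docs)))

if oov_rate > 0.3:  # threshold is arbitrary; a real system would tune it
    print(f"{oov_rate:.0%} unseen terms; retrain the engine")
```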

Math types labor to make inference-hooked and probabilistic systems stay on course. The systems today use tricks that make a controlled vocabulary look like the tool of a dinobaby like me. Without getting into the weeds, the Autonomy system would drift.

And what does the cited paper say? “LLMs drift too.”

What does this mean? Here’s my dinobaby list of items to keep in mind:

  1. Smart software, if left to its own devices, will degrade over time; that is, outputs will drift from what the user wants. Feedback from users accelerates the drift because some feedback is, from the smart software’s point of view, spot on even if it is crazy or off the wall. Let this continue over a period of time and you get what the paper’s authors and my team member pointed out: degradation.
  2. Users who know how to look at a system’s outputs and validate or identify off the mark results can take corrective action; that is, ignore the outputs or fix them up. This is not common, and it requires specialized knowledge, time, and mental sharpness. Those who depend on TikTok or a smart system may not have these qualities in equal amounts.
  3. Entrepreneurs want money, power, or a new Tesla. Bringing up issues about smart software growing increasingly crazy like the dinobaby down the street is not valued. Hence, substantive problems with smart systems will require time, money, and expertise to remediate. Who wants that? Smart software is designed to improve efficiency, reduce costs, and make money. The result is a group of individuals who do PR, not up-to-snuff software.

Will anyone pay attention to this cited journal article? Sure, a few interns and maybe a graduate student or two. But at this time, the trend is that AI works and AI applied to something delivers a solution. Is that solution reliable or is it just good enough? What if the outputs deteriorate in a subtle way over time? What’s the fix? Who is responsible? The engineer who fiddled with thresholds? The VP of product development who dismissed objections about inherent bias in outputs?

I think you may have an answer to these questions. As a dinobaby, I can say, “Folks, I don’t have a clue about fixing up the smart software juggernaut.” I am skeptical of those who say, “Hey, it just works.” Okay, I hope you are correct.

Stephen E Arnold, July 19, 2023

Smart Software: Good Enough Plus 18 Percent More Quality

July 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Do I believe the information in “ChatGPT Can Turn Bad Writers into Better Ones”? No, I don’t. First, MIT is the outfit which had a special relationship with Jeffrey Epstein. Yep, that guy. Quite a pal. Second, academic outfits are known to house individuals who just make up or enhance research data. Does MIT have professors who do that? Of course not. But with Harvard professionals engaging in some ethical ballroom dancing with data, I want to be cautious. (And, please, navigate to the original write up and read the report. Subscribe too because Mr. Epstein is indisposed and unable to contribute to the academic keel of the scholarly steamboat.)

What counts, however, is perception, not reality. The write up puts some Chemical Guys shine on the information, so let’s take a look. It will be a shallow one because that is the spirit of some research today, and this dinobaby wants to get with the program. My writing may be lousy, but I do it myself, which seems to go against the current trend.

Here’s the core point in the write up from my point of view in rural Kentucky, a state known for its intellectual rigor and fine writing about basketball:

A new study by two MIT economics graduate students … suggests it could help reduce gaps in writing ability between employees. They found that it could enable less experienced workers who lack writing skills to produce work similar in quality to that of more skilled colleagues.

The point in my opinion is that cheaper workers can do what more expensive workers can do.

Just to drive home the point, the write up included this point:

The writers who chose to use ChatGPT took 40% less time to complete their tasks, and produced work that the assessors scored 18% higher in quality than that of the participants who didn’t use it.

The MidJourney highly original art system produced this picture of an accountant, trained online by the once proud University of Phoenix, who manifests great joy when discovering that smart software can produce marketing and PR collateral faster, cheaper, and better than a disgruntled English major wanting to rent a larger apartment in a big city. The accountant seems to be sitting in a modest thundershower of budget surplus.

For many, MIT has heft. Therefore, will this write up and the expert researchers’ data influence people; for instance, owners of marketing, SEO, reputation management, and PR companies?

Yep.

Observations:

  1. Layoffs will be accelerating
  2. Good enough becomes outstanding when financial benefits are fungible
  3. Assurances about employment security will be irrelevant.

And what about those MIT graduates? Better get a degree in math, computer science, engineering, or medieval English poetry. No, strike that medieval English poetry. Substitute “prompt engineer” or museum guide in Albania.

Stephen E Arnold, July 19, 2023

AI-Search Tool Talpa Burrows Into Library Catalogues

July 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

For a few years now, libraries have been able to augment their online catalogue with enrichment services from Syndetics Unbound, which adds details and imagery to each entry. Now the company is incorporating new AI capabilities, we learn from its write-up, “Introducing Talpa Search.” Talpa is still experimental and is temporarily available to libraries already using Syndetics Unbound.

A book lover in action. Thanks, MidJourney. You made me more appealing than I was in 1951, when I got kicked out of the library for reading books for adults, not stuff about Freddy the Pig.

Participating libraries will get a year of the service for free. We cannot know just how much they will be saving, though, since the pricing remains a mystery. Writer Tim Spalding describes how Talpa works:

“First, Talpa queries large language models (from Claude AI and ChatGPT) for books and other media. Critically, every item is checked against true and authoritative bibliographic data, solving the problem of invented answers (called ‘hallucinations’) that such models can fall into. Second, Talpa uses the natural-language abilities of large language models to parse and understand queries, which are then answered using traditional library data. Thus a search for ‘novels about World War II in France’ is broken down into subjects and tags and answered with results from the library’s collection. Our authoritative book data comes from Syndetics Unbound, Bowker and LibraryThing. Surprisingly, Talpa’s ability to find books by their cover design isn’t powered by AI at all, but by the effort of thousands of book lovers who have played LibraryThing’s CoverGuess cover-tagging game since 2010!”
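
The anti-hallucination step Spalding describes is a suggest-then-verify pattern: let the model propose titles, let authoritative data decide what survives. Here is a minimal sketch of that pattern; ask_llm and CATALOG are my illustrative stand-ins, not Talpa’s code or the Syndetics data.

```python
# Sketch of the "check every item against authoritative data" pattern
# described above. The LLM may invent titles; only titles confirmed by
# the bibliographic records make it into the results.

CATALOG = {  # authoritative bibliographic records, keyed by title
    "Suite Française": {"author": "Irène Némirovsky", "subjects": ["WWII", "France"]},
    "The Velveteen Rabbit": {"author": "Margery Williams", "subjects": ["toys"]},
}

def ask_llm(query: str) -> list[str]:
    """Stand-in for an LLM call returning candidate titles; it may hallucinate."""
    return ["Suite Française", "A Totally Invented Novel"]

def grounded_search(query: str) -> list[dict]:
    results = []
    for title in ask_llm(query):
        record = CATALOG.get(title)
        if record is not None:  # drop anything the real data cannot confirm
            results.append({"title": title, **record})
    return results

print(grounded_search("novels about World War II in France"))
```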

Interesting. If you don’t happen to be part of a library using Syndetics, you can try Talpa out at one of the three libraries linked to in the post. The tool sports a cute mole mascot and, to add a bit of personality, supplies mole facts beneath the search bar. As with many AI tools, the functionality has plenty of room to grow. For example, my search for “weaving velvet” did return a few loom-centered books scattered through the results but more prominently suggested works of fiction or philosophy that simply contained “velvet” in the title. (Including, adorably, several versions of “The Velveteen Rabbit.”) The write-up does not share when the tool will be available more widely, but we hope it will be more refined when it is. Is it AI? Isn’t everything?

Cynthia Murrell, July 19, 2023

Threads and Twitter: A Playground Battle for the Ages

July 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Twitter helped make some people famous. No big name publisher needed. Just an algorithm and a flow of snappy comments. Fame. Money. A platformer, sorry, I meant platform.

Is informed, objective analysis of Facebook and Twitter needed? Sure, but the approach taken by some is more like an argument at a school picnic over the tug-of-war teams. Which team will end up with grass stains? Which will get the ribbon with the check mark? MidJourney developed this original art object.

Now that Twitter has gone Musky, those who may perceive themselves as entitled to a blue check, algorithmic love, and a big, free megaphone are annoyed. At least that’s how I understand “Five Reasons Threads Could Still Go the Distance.” This essay is about the great social media dust up between those who love Teslas and those who can find some grace in the Zuck.

Wait, wasn’t the Zuck the subject of some criticism? Cambridge Analytica-type activities and possibly some fancy dancing with the name of the company, the future of the metaverse, and expanding land holdings in Hawaii? Forget that.

I learned in the article, which is flavored with some business consulting advice from a famous social media personality:

It’s always a fool’s errand to judge the prospects of a new social network a couple weeks into its history.

So what is the essay about? Exactly.

I learned from the cited essay:

Twitter’s deterioration continues to accelerate. Ad revenue is down by 50 percent, according to Musk, and — despite the company choosing not to pay many of its bills — the company is losing money. Rate limits continue to make the site unusable to many free users, and even some paid ones. Spam is overwhelming users’ direct messages so much that the company disabled open DMs to free users. The company has lately been reduced to issuing bribe-like payouts to a handful of hand-picked creators, many of whom are aligned with right-wing politics. If that’s not a death spiral, what is?

Wow, a death spiral at the same time Threads may be falling in love with “rate limits.”

Can the Zuck kill off Twitter? Here’s hoping. But there is only one trivial task to complete, according to the cited article:

To Zuckerberg, the concept has been proved out. The rest is simply an execution problem. [Emphasis added]

As that lovable influencer, social media maven, and management expert Peter Drucker observed:

What gets measured, gets managed.

Isn’t it early days for measurement? Instagram was a trampoline for Threads. The Musk management modifications seem to be working exactly as the rocket scientist planned them to function. What do billions in losses mean to a person whose rockets don’t blow up too often?

Several observations:

  1. Analyzing Threads and Twitter is a bit like a school yard argument, particularly when the respective big dogs want to fight in a cage in Las Vegas
  2. The possible annoyance or mild outrage from those who loved the good old free Twitter is palpable
  3. Social media remains an interesting manifestation of human behavior.

Net net: I find social media a troubling innovation. But it does create news which some find as vital as oxygen, water, and clicks. Yes, clicks. That is the objective, I believe.

Stephen E Arnold, July 18, 2023

Sam the AI-Man Explains His Favorite Song, My Way, to the European Union

July 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It seems someone is uncomfortable with AI regulation despite asking for regulation. TIME posts this “Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation.” OpenAI insists AI must be regulated posthaste. CEO Sam Altman even testified to Congress about it. But when push comes to legislative action, the AI-man balks. At least when it affects his company. Reporter Billy Perrigo tells us:

“The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation. But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company.”

What, to Altman’s mind, makes OpenAI exempt from the much-needed regulation? Their product is a general-purpose AI, as opposed to a high-risk one. So it contributes to benign projects as well as consequential ones. How’s that for logic? Apparently it was good enough for EU regulators. Or maybe they just caved to OpenAI’s empty threat to pull out of Europe.

Is it true that Mr. AI-Man only follows the rules he promulgates? Thanks for the Leonardo-like image of students violating a university’s Keep Off the Grass rule.

We learn:

“The final draft of the Act approved by E.U. lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk. Instead, the agreed law called for providers of so-called ‘foundation models,’ or powerful AI systems trained on large quantities of data, to comply with a smaller handful of requirements including preventing the generation of illegal content, disclosing whether a system was trained on copyrighted material, and carrying out risk assessments.”

Of course, all of this may be a moot point given the catch-22 of asking legislators to regulate technologies they do not understand. Tech companies’ lobbying dollars seem to provide the most clarity.

Cynthia Murrell, July 18, 2023

When Wizards Flail: The Mysteries of Smart Software

July 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

How about that smart software stuff? VCs are salivating. Whiz kids are emulating Sam AI-man. Users are hoping there is a job opening for a Wal-Mart greeter. But there is a hitch in the git along; specifically, some bright experts are not able to understand what smart software does to generate output. The cloud of unknowing is thick and has settled over the Land of Obfuscation.

“Even the Scientists Who Build AI Can’t Tell You How It Works” has a particularly interesting kicker:

“We built it, we trained it, but we don’t know what it’s doing.”

A group of artificial intelligence engineers struggling with the question, “What the heck is the system doing?” A click of the slide rule for MidJourney for this dramatic depiction of AI wizards at work.

The write up (which is an essay-interview confection) includes some thought-provoking comments. Here are five; you can visit the cited article for more scintillating insights:

Item 1: “… with reinforcement learning, you say, “All right, make this entire response more likely because the user liked it, and make this entire response less likely because the user didn’t like it.”

Item 2: “… The other big unknown that’s connected to this is we don’t know how to steer these things or control them in any reliable way. We can kind of nudge them …”

Item 3: “We don’t have the concepts that map onto these neurons to really be able to say anything interesting about how they behave.”

Item 4: “… we can sort of take some clippers and clip it into that shape. But that doesn’t mean we understand anything about the biology of that tree.”

Item 5: “… because there’s so much we don’t know about these systems, I imagine the spectrum of positive and negative possibilities is pretty wide.”

For more of this type of “explanation,” please, consult the source document cited above.
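
Item 1 is the one that repays a concrete toy. The sketch below, with made-up responses and a ±1 reward standing in for a thumbs up or down, shows what “make this entire response more likely” means in practice; it is a caricature of reinforcement from feedback, not OpenAI’s training code.

```python
import random

# Toy whole-response reinforcement in the spirit of Item 1: a thumbs up
# raises the sampling weight of the entire response, a thumbs down lowers
# it. Nothing identifies *which part* of the response earned the signal.

weights = {"helpful answer": 1.0, "confident nonsense": 1.0}

def sample_response() -> str:
    r = random.uniform(0, sum(weights.values()))
    for response, w in weights.items():
        r -= w
        if r <= 0:
            return response
    return response  # floating point edge case

for _ in range(200):
    response = sample_response()
    reward = 1 if response == "helpful answer" else -1  # stand-in user feedback
    weights[response] = max(0.1, weights[response] * (1 + 0.1 * reward))

print(weights)  # "helpful answer" now dominates; nobody inspected why
```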

Several observations:

  1. I like the nudge and watch approach. Humanoids learning about what their code does may be useful.
  2. The nudging is subjective (a human skill), and the tree-growing comparison amounts to admitting no one knows exactly how the process works. Just do the bonsai thing. Interesting, but is it efficient? Will it work? Sure, or at least as far as Silicon Valley thinking permits.
  3. The wide spectrum of good and bad. My reaction is to ask the striking writers and actors what their views of the bad side of the deal are. What if the writers get frisky and start throwing spit balls or (heaven forbid) old IBM Selectric type balls? Scary.

Net net: Perhaps Google knows best? Tensors, big computers, need for money, and control of advertising — I think I know why Google tries so hard to frame the AI discussion. A useful exercise is to compare what Google’s winner in the smart software power struggle has to say about Google’s vision. You can find that PR emission at this link. Be aware that the interviewer’s questions are almost as long as the interview subject’s answers. Does either suggest downsides comparable to the five items cited in this blog post?

Stephen E Arnold, July 18, 2023

Hit Delete. Save Money. Data Liability Is Gone. Is That Right?

July 17, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Reddit Removed Your Chat History from before 2023” stated:

… legacy chats were being migrated to the new chat platform and that only 2023 data is being brought over, adding that they “hope” a data export will help the user get back the older chats. The admin told another user asking whether there was an option to stay on the legacy chat that no, there isn’t, and Reddit is “working on making new chats better.”

A young attorney studies ancient Reddit data from 2023. That’s when information began, because a great cataclysm destroyed any previous, possibly useful data for a legal matter. But what about the Library of Congress? But what about the Internet Archive? But what about back up tapes at assorted archives? Yeah, right. Thanks for the data in amber, MidJourney.

The cited article does not raise the following obviously irrelevant questions:

  1. Are there backups which can be consulted?
  2. Are there copies of the Reddit chat data?
  3. Was the action taken to reduce costs or legal liability?

I am not a Reddit user, nor do I affix site:reddit or append the word “reddit” to my queries. Some may find the service useful, but I am a dinobaby and hopelessly out of touch with where the knowledge action is.

As an outsider, my initial reaction is that dumping data has two immediate paybacks: reduced storage costs and a reduced likelihood that a group of affable lawyers will ask for historic data about a Reddit user’s activity. My hunch is that users of a free service cannot fathom why a commercial enterprise would downgrade or eliminate a free service. Gee, why?

I think I would answer the question with one word, “Adulting.”

Stephen E Arnold, July 17, 2023

Financial Analysts, Lawyers, and Consultants Can See Their Future

July 17, 2023

It is the middle of July 2023, and I think it is time for financial analysts, lawyers, and consultants to spruce up their résumés. Why would a dinobaby make such a suggestion to millions of the beloved Millennials, GenXers, the adorable GenY folk, and the vibrant GenZ lovers of TikTok, BMWs, and neutral colors?

I read three stories helpfully displayed by my trusty news reader. Let’s take a quick look at each and offer a handful of observations.

The first article is “This CEO Replaced 90% of Support Staff with an AI Chatbot.” The write up reports:

The chief executive of an Indian startup laid off 90% of his support staff after the firm built a chatbot powered by artificial intelligence that he says can handle customer queries much faster than his employees.

Yep, better, faster, and cheaper. Pick all three, which is exactly what some senior managers will do. AI is now disrupting. But what about “higher skill” jobs than talking on the phone and looking up information for a clueless caller?

The second article is newsy or is it newsie? “Open AI and Associated Press Announce Partnership to Train AI on New Articles” reports:

[The deal] will see OpenAI licensing text content from the AP archives that will be used for training large language models (LLMs). In exchange, the AP will make use of OpenAI’s expertise and technology — though the media company clearly emphasized in a release that it is not using generative AI to help write actual news stories.

Will these stories become the property of the AP? Does Elon Musk have confidence in himself?

Young professionals learning that they are able to find their future elsewhere. In the MidJourney confection are a lawyer, a screenwriter, and a consultant at a blue chip outfit selling MBAs at five times the cost of their final year at university.

I think that the move puts Google in a bit of a spot if it processes AP content and a legal eagle can find that content in a Bard output. More significantly, hasta la vista, reporters. Now the elimination of hard working, professional journalists will not happen immediately. However, from my vantage point in rural Kentucky, I hear the train a-rollin’ down the tracks. Whooo Whooo.

The third item is “Producers Allegedly Sought Rights to Replicate Extras Using AI, Forever, for Just $200.” The write up reports:

Hollywood’s top labor union for media professionals has alleged that studios want to pay extras around $200 for the rights to use their likenesses in AI – forever – for just $200.

Will the unions representing these skilled professionals refuse to cooperate? Does Elon Musk like Grimes’s music?

A certain blue chip consulting firm has made noises about betting $2 billion on smart software and Microsoft consulting. Oh, oh. Junior MBAs, it may not be too late to get an associate of arts degree in modern poetry so you can work as a prompt engineer. As a famous podcasting person says, “What say you?”

Several questions:

  1. Will trusted, reliable, research supporting real news organizations embrace smart software and say farewell to expensive humanoids?
  2. Will those making videos use computer generated entities?
  3. Will blue chip consulting firms find a way to boost partners’ bonuses standing on the digital shoulders of good enough software?

I sure hope you answered “no” to each of these questions. I have a nice two cruzeiro collectible from Brazil, circa 1952, to sell you. Make me an offer. Collectible currency is an alternative to writing prompts or becoming a tour guide in Astana. Oh, that’s in Kazakhstan.

Smart software is a cost reducer because humanoids [a] require salaries and health care, [b] take vacations, [c] create security vulnerabilities or are security vulnerabilities, and [d] require more than high school science club management methods related to sensitive issues.

Money and good enough will bring changes in news, Hollywood, and professional services.

Stephen E Arnold, July 17, 2023

Need Research Assistance? Skip the Special Librarian. Go to Elicit

July 17, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Academic databases are the bedrock of research. Unfortunately, most of them are hidden behind paywalls. If researchers get past the paywalls, they encounter other problems with accurate results and access to texts. Databases have improved over the years, but AI algorithms make things better. Elicit is a new database marketed as a digital assistant with less intelligence than Alexa, Siri, and Google but one that can comprehend simple questions.

“This is indeed the research library. The shelves are filled with books. You know what a book is, don’t you? Also, you will find that this research library is not used much any more. Professors just make up data. Students pay others to do their work. If you wish, I will show you how to use the card catalog. Our online public access terminal and library automation system does not work. The university’s IT department is busy moonlighting for a professor who is a consultant to a social media company,” says the senior research librarian.

What exactly is Elicit?

“Elicit is a research assistant using language models like GPT-3 to automate parts of researchers’ workflows. Currently, the main workflow in Elicit is Literature Review. If you ask a question, Elicit will show relevant papers and summaries of key information about those papers in an easy-to-use table.”

Researchers use Elicit to guide their research and discover papers to cite. Researcher feedback stated they use Elicit to answer their questions, find paper leads, and get better exam scores.

Elicit demonstrates its intuitiveness with its AI-powered research tools. Search results contain papers that do not match the keywords but semantically match the meaning of the query. Keyword matching also allows researchers to narrow or expand specific queries with filters. The summarization tool creates a custom summary based on the research query and simplifies complex abstracts. The citation graph semantically searches citations and returns more relevant papers. Results can be organized, and more information can be added without creating new queries.
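
The semantic matching at work here is typically cosine similarity over text embeddings. A minimal sketch, with a crude character-count vector standing in for a real embedding model:

```python
import math

# Toy semantic search: rank titles by cosine similarity between the query
# vector and each title vector. The character-frequency "embedding" is a
# stand-in; a real system would call an embedding model instead.

def embed(text: str) -> list[float]:
    return [float(text.lower().count(c)) for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((math.hypot(*a) or 1.0) * (math.hypot(*b) or 1.0))

def semantic_search(query: str, titles: list[str], top_k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(titles, key=lambda t: cosine(q, embed(t)), reverse=True)[:top_k]

papers = ["Weaving on the Loom", "The Velveteen Rabbit", "A History of Velvet"]
print(semantic_search("weaving velvet", papers))
```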

Elicit does have limitations, such as the inability to evaluate information quality. Also, Elicit is still a new tool, so mistakes will be made as development continues. Elicit does warn users about mistakes and advises them to rely on tried and true, old-fashioned research methods of evaluation.

Whitney Grace, July 16, 2023

AI Analyzed by a Human from Microsoft

July 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Artificial Intelligence Doesn’t Have Capability to Take Over, Microsoft Boss Says” provides some words of reassurance when Sam AI-Man’s team is suggesting annihilation of the human race. Here are two passages I found interesting in the article-as-interview write up.

This is an illustration of a Microsoft training program for its smart future employees. Humans will learn or be punished by losing their Microsoft 365 account. The picture is a product of the gradient surfing MidJourney.

First snippet of interest:

“The potential for this technology to really drive human productivity… to bring economic growth across the globe, is just so powerful, that we’d be foolish to set that aside,” Eric Boyd, corporate vice president of Microsoft AI Platforms told Sky News.

Second snippet of interest:

“People talk about how the AI takes over, but it doesn’t have the capability to take over. These are models that produce text as output,” he said.

Now what about this passage posturing as analysis:

Big Tech doesn’t look like it has any intention of slowing down the race to develop bigger and better AI. That means society and our regulators will have to speed up thinking on what safe AI looks like.

I wonder if anyone is considering that AI in the hands of Big Tech might have some interest in controlling some of the human race. Smart software seems ideal as an enabler of predatory behavior. Regulators thinking? Yeah, that’s a posture sure to deal with smart software’s applications. Microsoft, do you believe this colleague’s marketing hoo hah?

Stephen E Arnold, July 14, 2023
